# Review & Rate Outputs

Our outputs are ready in the Evaluation section:

<figure><img src="https://3106255375-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F03h3SC78wyaaTZz7oUet%2Fuploads%2FnJYXgCY9vpyPuBpwCFu4%2Freview%20and%20rate%20outputs.jpg?alt=media&#x26;token=913e89ce-e6d0-49df-b981-41a356d845a8" alt=""><figcaption></figcaption></figure>

Now we need to rate each one.

To do that, let's think about what makes a good email subject line for our use case.

On the one hand, these *are* email subject lines (a good start), and their content is relevant to the input. On the other hand, they're wrapped in double quotes, which is not what we want. We'll subtract 2 stars from each rating for the improper formatting, then rate each one from 1 to 3 stars based on how much we like the result.
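As a rough sketch, the rubric above could be expressed programmatically. The function name and the 5-star base scale here are illustrative assumptions, not part of any tool in this guide:

```python
def rate_subject_line(subject: str, base_rating: int) -> int:
    """Apply the rubric: start from a 1-5 'how much we like it' score,
    then subtract 2 stars if the line is wrapped in double quotes."""
    rating = base_rating
    if subject.startswith('"') and subject.endswith('"'):
        rating -= 2  # penalty for improper formatting
    return max(rating, 1)  # ratings bottom out at 1 star

# A quoted subject line we'd otherwise give 5 stars lands at 3
print(rate_subject_line('"Boost Your Sales Today"', 5))  # → 3
```

Since every output in this batch has the quote problem, the effective range ends up being 1 to 3 stars, which matches the rubric.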

{% hint style="danger" %}
You might notice that these results from GPT-4 Omni are eerily similar to the ones in our validation examples. That's because when writing this guide, we used GPT-4 Omni to come up with those email subject lines in the first place! If you selected a different model, you would get more diverse outputs.
{% endhint %}

To fix the formatting, we need to update our prompt.
