# Compare Fine-tunes

When your model finishes fine-tuning, Entry Point AI will automatically start evaluating it against your validation examples.

{% hint style="info" %}
Validation examples are processed with a temperature of 0. This makes the outputs deterministic, so you get the same output every time.
{% endhint %}

Entry Point AI will also score the resulting outputs using the [Scoring Method](https://docs.entrypointai.com/key-concepts/evaluation) selected under the project's Evaluation settings.

<figure><img src="https://3106255375-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F03h3SC78wyaaTZz7oUet%2Fuploads%2FHO8Qjlgtry1Hj3QR3vf3%2Fannotely_image%20(11).jpeg?alt=media&#x26;token=3de9dd99-6ede-4648-a3af-9e2c12d49ba3" alt=""><figcaption></figcaption></figure>

For classifiers, [Exact Match](https://docs.entrypointai.com/key-concepts/evaluation#exact-match) is typically a good choice and will automatically determine if the correct classification was chosen.

For generative outputs, choose [Manual](https://docs.entrypointai.com/key-concepts/evaluation#manual) or [Predictive](https://docs.entrypointai.com/key-concepts/evaluation#predictive).

To review the outputs and scores, open the fine-tuned model from the Models page.

<figure><img src="https://3106255375-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F03h3SC78wyaaTZz7oUet%2Fuploads%2FE4cAqhhF9Oy20XHuwznJ%2Fpage_1.jpeg?alt=media&#x26;token=9c9ef211-a098-47a8-b79d-e99f785debf8" alt=""><figcaption></figcaption></figure>

Then, scroll down to the Evaluation section.

<figure><img src="https://3106255375-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F03h3SC78wyaaTZz7oUet%2Fuploads%2FQep8Js6Fm9uiKnN7xTB1%2Fevaluation%20section.jpg?alt=media&#x26;token=a3bf2f8d-254e-4d5a-b65a-4aef912af7e3" alt=""><figcaption></figcaption></figure>

With manual scoring, you go through each output and choose the rating that best reflects its quality. When you finish, you'll see an overall percentage score that you can compare against other templated or fine-tuned models.

Predictive scoring uses an LLM to automate the scoring process, which is much faster. It's still a good idea to spot-check the predicted scores with a human review.


---

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.entrypointai.com/guides/fine-tune-a-model/compare-fine-tunes.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
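As a sketch of the mechanism above: the request is a plain HTTP GET, so the main thing to get right is URL-encoding the question before appending it as the `ask` query parameter. The helper function name below is illustrative, not part of any official client; only the page URL and the `ask` parameter come from this documentation.

```python
from urllib.parse import quote

# URL of this documentation page, taken from the example above.
PAGE_URL = "https://docs.entrypointai.com/guides/fine-tune-a-model/compare-fine-tunes.md"

def build_ask_url(question: str) -> str:
    """Build the GET URL for querying this page with a natural-language question.

    `quote` percent-encodes spaces and punctuation so the question is a
    valid query-string value.
    """
    return f"{PAGE_URL}?ask={quote(question)}"

url = build_ask_url("Which scoring methods work best for generative outputs?")
print(url)
# Fetch it with any HTTP client, e.g.:
#   urllib.request.urlopen(url).read()
# or, from a shell:
#   curl "<url>"
```

The response body contains the direct answer plus supporting excerpts, so no special parsing is required beyond reading the text.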
