Create a Templated Model

Now that you have a handful of validation examples and a template, it's time to create a templated model for evaluation.

Make sure you have set up your OpenAI integration (or another model provider).
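
If you want to confirm that the API key you are about to connect is valid, a quick standalone call with the official OpenAI Python SDK will do it. This is a minimal sketch, independent of Entry Point, that assumes the `openai` package is installed and `OPENAI_API_KEY` is set in your environment:

```python
# Minimal sanity check that your OpenAI API key works before connecting it
# to Entry Point. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model your account can access
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
    max_tokens=5,
)
print(response.choices[0].message.content)
```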

Navigate to the models tab and press +. In the dropdown that appears, choose "Apply a template."

In the dialog that appears, the template you created should be selected automatically.

For the platform, choose OpenAI (or the model provider of your choice).

You can choose any model. We'll go with GPT-4 Omni.
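
Under the hood, a templated model pairs your template with a chosen base model. Conceptually, each completion looks something like the sketch below, which fills a template's fields with an example's values and sends the rendered prompt to gpt-4o via the OpenAI SDK. The template text and field names are hypothetical, and this is an illustration rather than Entry Point's actual implementation:

```python
# Illustrative sketch of what a templated model does per request: fill the
# template's fields with an example's values, then send the rendered prompt
# to the chosen base model. Template text and field names are hypothetical.
from openai import OpenAI

client = OpenAI()

template = "Write a product description for {product_name} aimed at {audience}."
fields = {"product_name": "TrailBlazer hiking boots", "audience": "weekend hikers"}

prompt = template.format(**fields)

response = client.chat.completions.create(
    model="gpt-4o",  # GPT-4 Omni
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```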

Leave the default name and press "Add to project."

Open the new templated model by clicking its name in the table.

At the bottom of this detail view, you will see the Evaluation section.

Immediately, Entry Point will start evaluating your model against your validation examples.

After a moment, you should see the results appear.
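
For intuition, an evaluation run is roughly the following loop: render the template for each validation example, generate a completion, and collect the outputs for review. This sketch calls the OpenAI SDK directly with hypothetical examples; Entry Point handles all of this for you:

```python
# Conceptual sketch of an evaluation pass: render the template for each
# validation example, generate a completion, and collect outputs for review.
# Not Entry Point's implementation; the examples and field names are made up.
from openai import OpenAI

client = OpenAI()

template = "Write a product description for {product_name} aimed at {audience}."
validation_examples = [
    {"product_name": "TrailBlazer hiking boots", "audience": "weekend hikers"},
    {"product_name": "CloudNine pillow", "audience": "light sleepers"},
]

results = []
for example in validation_examples:
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": template.format(**example)}],
    )
    results.append({"fields": example, "output": completion.choices[0].message.content})

# Each stored output can now be reviewed and rated against its example.
for result in results:
    print(result["fields"]["product_name"], "->", result["output"][:80])
```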

In the next part, we will review the results and see how our model performed.
