# Quick Start
This guide picks up where Installation left off. You have `ontogen` in your build dependencies, a `build.rs`, and an empty `src/schema/` directory. Let's generate some code.
## Define an entity
Create `src/schema/task.rs` with a simple `Task` entity:

```rust
// src/schema/task.rs
use ontogen_macros::OntologyEntity;
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize, OntologyEntity)]
#[ontology(entity, table = "tasks")]
pub struct Task {
    #[ontology(id)]
    pub id: String,
    pub name: String,
    #[serde(default)]
    pub description: Option<String>,
    #[ontology(enum_field)]
    pub status: Option<TaskStatus>,
    pub created_at: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TaskStatus {
    Todo,
    InProgress,
    Done,
}
```
Re-export it from `src/schema/mod.rs`:

```rust
// src/schema/mod.rs
mod task;
pub use task::{Task, TaskStatus};
```
Let’s break down what the annotations mean:
- `#[derive(OntologyEntity)]` makes the `#[ontology(...)]` attributes legal Rust. The derive macro itself does nothing; it expands to zero tokens. All interpretation happens in `build.rs` via `syn`.
- `#[ontology(entity, table = "tasks")]` marks this struct as an Ontogen entity. The `table` attribute sets the SeaORM table name. If you omit it, Ontogen infers it from the struct name in snake_case (`task`).
- `#[ontology(id)]` marks the primary key field.
- `#[ontology(enum_field)]` tells generators this field holds an enum value, stored as a string in the database.
- Fields without `#[ontology(...)]` are plain data fields. `Option<T>` fields become nullable columns. `Vec<String>` fields (without a `relation` annotation) are stored as JSON.
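The snake_case inference mentioned above can be pictured with a small sketch. This is illustrative only, not ontogen's actual implementation:

```rust
// Illustrative sketch of struct-name -> table-name inference (not ontogen's real code).
fn to_snake_case(name: &str) -> String {
    let mut out = String::new();
    for (i, ch) in name.chars().enumerate() {
        if ch.is_uppercase() {
            // Insert an underscore before every uppercase letter except the first.
            if i > 0 {
                out.push('_');
            }
            out.extend(ch.to_lowercase());
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    assert_eq!(to_snake_case("Task"), "task");
    assert_eq!(to_snake_case("TaskStatus"), "task_status");
}
```

So a struct named `Task` maps to table `task` unless you override it with `table = "..."`.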
## Generate SeaORM entities

Update your `build.rs` to parse the schema and run the SeaORM generator:
```rust
// build.rs
use ontogen::CodegenError;

fn unwrap_codegen<T>(result: Result<T, CodegenError>, stage: &str) -> T {
    result.unwrap_or_else(|e| {
        e.emit_cargo_warning();
        panic!("{stage}: {e}");
    })
}

fn main() {
    println!("cargo:rerun-if-changed=build.rs");

    // Stage 1: Parse schema
    let schema = unwrap_codegen(
        ontogen::parse_schema(&ontogen::SchemaConfig {
            schema_dir: "src/schema".into(),
        }),
        "parse schema",
    );

    // Stage 2: Generate SeaORM entities
    let seaorm = unwrap_codegen(
        ontogen::gen_seaorm(
            &schema.entities,
            &ontogen::SeaOrmConfig {
                entity_output: "src/persistence/db/entities/generated".into(),
                conversion_output: "src/persistence/db/conversions/generated".into(),
                skip_conversions: vec![],
            },
        ),
        "generate SeaORM entities",
    );
}
```

Run `cargo build`. Ontogen creates several files:
`src/persistence/db/entities/generated/task.rs`, the SeaORM entity model:

```rust
//! Generated by ontogen. DO NOT EDIT.

use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "tasks")]
pub struct Model {
    #[sea_orm(primary_key, auto_increment = false)]
    pub id: String,
    pub name: String,
    pub description: Option<String>,
    pub status: Option<String>,
    pub created_at: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
```

`src/persistence/db/conversions/generated/task.rs`, conversion methods between your schema type and the SeaORM model:

```rust
//! Generated by ontogen. DO NOT EDIT.

impl Task {
    pub fn from_model(model: &entity::Model) -> Self {
        Self {
            id: model.id.clone(),
            name: model.name.clone(),
            description: model.description.clone(),
            status: model.status.as_deref().and_then(|s| s.parse().ok()),
            created_at: model.created_at.clone(),
        }
    }

    pub fn to_active_model(&self) -> entity::ActiveModel {
        use sea_orm::Set;
        entity::ActiveModel {
            id: Set(self.id.clone()),
            name: Set(self.name.clone()),
            description: Set(self.description.clone()),
            status: Set(self.status.as_ref().map(|v| v.to_string())),
            created_at: Set(self.created_at.clone()),
        }
    }
}
```

Both directories also get a `mod.rs` that re-exports everything. Notice how `enum_field` caused the `status` column to be `Option<String>` in the SeaORM model, with the conversion handling `to_string()` and `parse()` automatically.
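That round-trip relies on `TaskStatus` converting to and from a string, i.e. on `Display` and `FromStr` impls. How those impls come into existence depends on your setup (ontogen may generate them); the sketch below shows the contract the conversions rely on, with variant-name strings as an assumed storage format:

```rust
use std::fmt;
use std::str::FromStr;

#[derive(Debug, Clone, PartialEq)]
pub enum TaskStatus {
    Todo,
    InProgress,
    Done,
}

// The generated conversions call `to_string()` and `parse()`, which require
// Display and FromStr. These impls are a hand-written illustration; the exact
// stored strings are an assumption, not ontogen's documented format.
impl fmt::Display for TaskStatus {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let s = match self {
            TaskStatus::Todo => "Todo",
            TaskStatus::InProgress => "InProgress",
            TaskStatus::Done => "Done",
        };
        f.write_str(s)
    }
}

impl FromStr for TaskStatus {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "Todo" => Ok(TaskStatus::Todo),
            "InProgress" => Ok(TaskStatus::InProgress),
            "Done" => Ok(TaskStatus::Done),
            other => Err(format!("unknown status: {other}")),
        }
    }
}

fn main() {
    let status = TaskStatus::InProgress;
    let stored: String = status.to_string();          // what goes into the DB column
    let parsed: TaskStatus = stored.parse().unwrap(); // what from_model recovers
    assert_eq!(parsed, status);
}
```

Note that `from_model` uses `.parse().ok()`, so an unrecognized string in the database becomes `None` rather than an error.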
## Add the store layer

The store layer generates CRUD methods and lifecycle hooks. Add it to `build.rs`:
```rust
// Stage 3: Generate Store layer
let _store = unwrap_codegen(
    ontogen::gen_store(
        &schema.entities,
        Some(&seaorm),
        &ontogen::StoreConfig {
            output_dir: "src/store/generated".into(),
            hooks_dir: Some("src/store/hooks".into()),
            schema_module_path: ontogen::DEFAULT_SCHEMA_MODULE_PATH.into(),
        },
    ),
    "generate store",
);
```

Passing `Some(&seaorm)` gives the store generator access to the SeaORM output metadata: exact table names, column mappings, junction table details. If you pass `None`, it infers these from naming conventions. Explicit is better.

`schema_module_path` is the Rust path generated code uses to import your schema types. Use the `ontogen::DEFAULT_SCHEMA_MODULE_PATH` constant (which expands to `"crate::schema"`) so any future change to the convention propagates here automatically.
After building, you get:
`src/store/generated/task.rs`, CRUD methods on your `Store` type:

```rust
impl Store {
    pub async fn list_tasks(&self) -> Result<Vec<Task>, AppError> { ... }
    pub async fn get_task(&self, id: &str) -> Result<Task, AppError> { ... }
    pub async fn create_task(&self, task: Task) -> Result<Task, AppError> { ... }
    pub async fn update_task(&self, id: &str, updates: TaskUpdate) -> Result<Task, AppError> { ... }
    pub async fn delete_task(&self, id: &str) -> Result<(), AppError> { ... }
}
```

Each method calls lifecycle hooks at the right points:

```rust
pub async fn create_task(&self, mut task: Task) -> Result<Task, AppError> {
    hooks::before_create(self, &mut task).await?;

    let active = task.to_active_model();
    active.insert(self.db()).await.map_err(|e| AppError::DbError(e.to_string()))?;

    let created = self.get_task(&task.id).await?;
    hooks::after_create(self, &created).await?;
    Ok(created)
}
```

`src/store/hooks/task.rs`, a scaffolded hook file that you own:
```rust
//! Lifecycle hooks for Task.
//!
//! This file was scaffolded by ontogen. It is yours to edit.
//! This file is NEVER overwritten by the generator.

pub async fn before_create(_store: &Store, _task: &mut Task) -> Result<(), AppError> {
    Ok(())
}

pub async fn after_create(_store: &Store, _task: &Task) -> Result<(), AppError> {
    Ok(())
}

pub async fn before_update(_store: &Store, _current: &Task, _updates: &TaskUpdate) -> Result<(), AppError> {
    Ok(())
}

pub async fn after_update(_store: &Store, _task: &Task) -> Result<(), AppError> {
    Ok(())
}

pub async fn before_delete(_store: &Store, _id: &str) -> Result<(), AppError> {
    Ok(())
}

pub async fn after_delete(_store: &Store, _id: &str) -> Result<(), AppError> {
    Ok(())
}
```

This is where your business logic goes. Validate inputs in `before_create`. Send notifications in `after_update`. Prevent deletions in `before_delete` by returning `Err`. These files are scaffolded once and never overwritten; they're yours.
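As a sketch of what a filled-in hook might look like, here is `before_create`-style validation logic. The stub `Task` and `AppError` types (and the `Validation` variant) are assumptions standing in for your real ones, and `async` plus the `Store` parameter are omitted to keep the example self-contained:

```rust
// Stub types standing in for your schema's Task and your app's error type
// (assumptions for illustration, not ontogen's generated types).
#[derive(Debug, PartialEq)]
enum AppError {
    Validation(String),
}

struct Task {
    name: String,
}

// The kind of check you would put inside hooks::before_create.
fn validate_new_task(task: &Task) -> Result<(), AppError> {
    if task.name.trim().is_empty() {
        return Err(AppError::Validation("task name must not be empty".into()));
    }
    Ok(())
}

fn main() {
    assert!(validate_new_task(&Task { name: "Write docs".into() }).is_ok());
    assert!(validate_new_task(&Task { name: "   ".into() }).is_err());
}
```

Because hooks run before the insert, returning `Err` here aborts `create_task` and nothing is written to the database.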
A `TaskUpdate` struct is also generated with `Option` wrappers for partial updates:

```rust
#[derive(Debug, Clone, Default)]
pub struct TaskUpdate {
    pub name: Option<String>,
    pub description: Option<Option<String>>,
    pub status: Option<Option<TaskStatus>>,
    pub created_at: Option<String>,
}
```

The double-`Option` on `description` lets callers distinguish "don't change this field" (`None`) from "set this field to null" (`Some(None)`).
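The three states can be demonstrated with a small self-contained sketch. The `apply_description` helper is illustrative, not part of the generated code:

```rust
// Illustrative: how an update method interprets Option<Option<String>>.
fn apply_description(current: &mut Option<String>, update: Option<Option<String>>) {
    if let Some(new_value) = update {
        *current = new_value; // Some(Some(x)) sets, Some(None) clears
    }
    // None: leave the field untouched
}

fn main() {
    let mut desc = Some("old".to_string());

    apply_description(&mut desc, None); // don't change this field
    assert_eq!(desc, Some("old".to_string()));

    apply_description(&mut desc, Some(Some("new".to_string()))); // set a value
    assert_eq!(desc, Some("new".to_string()));

    apply_description(&mut desc, Some(None)); // set this field to null
    assert_eq!(desc, None);
}
```

Deriving `Default` means `TaskUpdate::default()` gives you an update that changes nothing, which you can then fill in field by field.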
## Write-if-changed optimization

Ontogen uses `write_if_changed` for all generated output. It reads the existing file content, compares it to the new content, and only writes when something actually changed. This matters because:
- Unchanged files keep their modification timestamps, so Cargo doesn’t recompile dependent crates.
- Your editor doesn’t show phantom diffs for files that weren’t modified.
- Incremental builds stay fast even with a large pipeline.
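The mechanism can be sketched in a few lines. This is an illustration of the idea, not ontogen's actual implementation, and the `bool` return value is an assumption:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sketch: write only when the on-disk bytes differ, preserving mtimes otherwise.
fn write_if_changed(path: &Path, content: &[u8]) -> io::Result<bool> {
    match fs::read(path) {
        Ok(existing) if existing.as_slice() == content => Ok(false), // unchanged: skip the write
        _ => {
            fs::write(path, content)?;
            Ok(true) // wrote (file was missing or its content differed)
        }
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("ontogen_demo.txt");
    let _ = fs::remove_file(&path);
    assert!(write_if_changed(&path, b"hello")?);  // first write
    assert!(!write_if_changed(&path, b"hello")?); // identical content: no write
    assert!(write_if_changed(&path, b"world")?);  // changed content
    Ok(())
}
```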
You can use this utility in your own build scripts too:
```rust
use ontogen::write_if_changed;

// Only writes if content differs from what's on disk
write_if_changed(&path, content.as_bytes())?;
```

## The full build.rs so far

Here's everything assembled:
```rust
// build.rs
use ontogen::CodegenError;

fn unwrap_codegen<T>(result: Result<T, CodegenError>, stage: &str) -> T {
    result.unwrap_or_else(|e| {
        e.emit_cargo_warning();
        panic!("{stage}: {e}");
    })
}

fn main() {
    println!("cargo:rerun-if-changed=build.rs");

    // Stage 1: Parse schema
    let schema = unwrap_codegen(
        ontogen::parse_schema(&ontogen::SchemaConfig {
            schema_dir: "src/schema".into(),
        }),
        "parse schema",
    );

    // Stage 2: Generate SeaORM entities + conversions
    let seaorm = unwrap_codegen(
        ontogen::gen_seaorm(
            &schema.entities,
            &ontogen::SeaOrmConfig {
                entity_output: "src/persistence/db/entities/generated".into(),
                conversion_output: "src/persistence/db/conversions/generated".into(),
                skip_conversions: vec![],
            },
        ),
        "generate SeaORM entities",
    );

    // Stage 3: Generate Store layer (CRUD + hooks)
    let _store = unwrap_codegen(
        ontogen::gen_store(
            &schema.entities,
            Some(&seaorm),
            &ontogen::StoreConfig {
                output_dir: "src/store/generated".into(),
                hooks_dir: Some("src/store/hooks".into()),
                schema_module_path: ontogen::DEFAULT_SCHEMA_MODULE_PATH.into(),
            },
        ),
        "generate store",
    );
}
```

## What's next
This gets you database entities, CRUD methods, and lifecycle hooks from a single struct definition. But the pipeline goes further:
- Your First Entity — add relationships (belongs_to, has_many, many_to_many) and wire up the full pipeline including API and server transports.
- Lifecycle Hooks — fill in those hook stubs with real validation and side effects.
- Server Transports — generate HTTP handlers, Tauri IPC commands, or MCP tool definitions from your API surface.
- Client Generation — generate typed TypeScript clients that match your server endpoints exactly.