Questions & Answers

Questions, Answered

Common questions about setup, API keys, and the founding pilot.
Don't see your answer? Email us directly.

GPT (OpenAI) Models — What You Need to Know

LampstandAI can optionally use OpenAI’s GPT API for sermon analysis. The GPT lineup is more complex than Gemini or Claude, with more model tiers available. Here’s a plain-language breakdown.


Current Flagship Family — GPT-5.4 (Released March 2026)

OpenAI’s current flagship is the GPT-5.4 family, released March 5, 2026, with Mini and Nano variants following on March 17.

GPT-5.4 Nano (gpt-5.4-nano)

The fastest and cheapest option. Priced at $0.20 per million input tokens and $1.25 per million output tokens. Good for tagging, classification, and simple summaries. Not suitable for deep theological reasoning or complex sermon analysis. Best used for lightweight background tasks only.

GPT-5.4 Mini (gpt-5.4-mini)

A solid middle-ground option. Priced at $0.75 per million input tokens and $4.50 per million output tokens. Handles real reasoning tasks well at a fraction of the flagship cost. A reasonable default for users who want meaningful AI analysis without paying premium prices. Mini handles FAQ-style responses and structured workflows effectively, cutting costs significantly compared to the standard tier.
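To make the per-token pricing concrete, here is a rough cost estimate in Python. The prices are the GPT-5.4 Mini rates listed above; the token counts are illustrative assumptions, not measurements of any real sermon.

```python
# Rough cost estimate for one sermon analysis with GPT-5.4 Mini.
# Prices are from this page; token counts below are illustrative guesses.
INPUT_PRICE_PER_M = 0.75    # USD per 1M input tokens (GPT-5.4 Mini)
OUTPUT_PRICE_PER_M = 4.50   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one API call."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Assume a ~5,000-token sermon plus prompt, and a ~1,500-token analysis.
print(f"${estimate_cost(5_000, 1_500):.4f} per sermon")  # about one cent
```

At these assumed sizes, even weekly use stays in the cents-per-month range, which is why Mini scales well for regular analysis.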

GPT-5.4 (gpt-5.4)

The standard flagship. Priced at $2.50 per million input tokens. Strong across reasoning, long-context handling, and complex analysis. A good choice for users doing serious AI-assisted sermon preparation, though at noticeably higher cost than Mini.

GPT-5.5 (gpt-5.5)

The latest and most capable GPT model as of May 2026. Strongest for complex coding, computer use, knowledge work, and research workflows. For sermon analysis purposes, the practical difference over GPT-5.4 is modest, and the additional cost may not be justified for most pastors. Recommended only for power users who want the absolute best available.


What About Free Access?

Unlike Gemini, the GPT API has no free tier. All usage requires a paid API key and is billed per token consumed. There is no free plan for API-based applications.


A Note on Model Stability

GPT models follow a similar pattern to Gemini: as of March 11, 2026, GPT-5.1 models are no longer available. OpenAI retires older models as newer ones launch. However, named snapshot versions (e.g. gpt-5.4-2026-03-05) remain stable and do not change behavior after release, which gives developers a reliable way to lock in consistent performance.
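One practical way to use snapshot stability is to pin the dated model name in your own configuration rather than relying on a floating alias. The sketch below shows the idea; the alias and snapshot names are taken from this page and the mapping itself is a hypothetical example, not part of any SDK.

```python
# Resolve a floating model alias to a pinned snapshot, so behavior stays
# fixed even if the provider repoints the alias to a newer model.
# Snapshot name below is the example from this page; mapping is illustrative.
PINNED_SNAPSHOTS = {
    "gpt-5.4": "gpt-5.4-2026-03-05",
}

def resolve_model(name: str) -> str:
    """Return the pinned snapshot for a known alias, else the name unchanged."""
    return PINNED_SNAPSHOTS.get(name, name)

print(resolve_model("gpt-5.4"))       # dated snapshot
print(resolve_model("gpt-5.4-mini"))  # no pin registered; passed through
```

You would then pass the resolved name as the `model` parameter in your API calls, so an alias update upstream never silently changes your results.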


Which model should I choose?

For most sermon analysis tasks, GPT-5.4 Mini offers the best balance of capability and cost. It handles lectionary connections, pattern recognition, and AI-assisted preparation well, at a price point that scales reasonably with regular use.

If cost is your primary concern and you mainly need tagging and search, GPT-5.4 Nano is the most affordable option — but expect lighter results on complex theological analysis.

If you want the deepest analysis and are not concerned about cost, GPT-5.4 or GPT-5.5 will give you the most capable results.


How GPT Compares to Gemini and Claude for This App

GPT has no free tier via the API, which means it is a paid-only option regardless of which model you choose. In terms of analysis quality, GPT-5.4 and Claude Sonnet 4.6 are broadly comparable for sermon-related tasks. Gemini 2.5 Flash remains the most cost-effective option for free users, while GPT-5.4 Mini and Claude Sonnet 4.6 are both strong choices for paid users looking for reliable quality at a reasonable price.


Model information current as of May 3, 2026. OpenAI updates its model lineup frequently. We will keep this page updated as changes are announced.

Claude AI Models — What You Need to Know

LampstandAI can optionally integrate Anthropic’s Claude API for AI-assisted sermon analysis. Here’s a plain-language guide to each available model.

Available Models (All stable — safe for production)

Claude operates on a simpler, more stable model structure than Gemini. All three current models are fully released with no preview risk.

Claude Haiku 4.5 (claude-haiku-4-5-20251001)

The fastest and most affordable Claude model. Pricing is $1 per million input tokens and $5 per million output tokens. Good for lightweight tasks such as tagging, short summaries, and quick lookups. Not well-suited for deep sermon analysis or complex theological reasoning. If cost is the primary concern, this is your starting point — but expect lighter results.

Claude Sonnet 4.6 (claude-sonnet-4-6)

The best combination of speed and intelligence. Priced at $3 per million input tokens and $15 per million output tokens. This is the recommended default for most sermon analysis tasks — lectionary connections, pattern recognition, and AI-assisted preparation. Fast enough for regular use, capable enough for meaningful results. Most users will find this model covers everything they need.

Claude Opus 4.7 (claude-opus-4-7)

The most capable generally available model for complex reasoning. Priced at $5 per million input tokens and $25 per million output tokens. Best for deep theological analysis, long sermon archive pattern recognition, and tasks where reasoning depth matters more than speed. Latency is moderate: responses take a little longer, but the quality of analysis is noticeably higher. Recommended for users doing serious, sustained AI-assisted sermon preparation.
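The three price tiers above can be compared directly for the same workload. The short Python sketch below uses the per-million-token prices listed on this page; the workload size is an illustrative assumption.

```python
# Compare the cost of one identical workload across the three Claude
# models, using the per-million-token prices listed on this page.
PRICES = {  # model ID: (input USD per 1M tokens, output USD per 1M tokens)
    "claude-haiku-4-5":  (1.0, 5.0),
    "claude-sonnet-4-6": (3.0, 15.0),
    "claude-opus-4-7":   (5.0, 25.0),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one call for the given model."""
    inp, out = PRICES[model]
    return input_tokens / 1_000_000 * inp + output_tokens / 1_000_000 * out

# Illustrative workload: ~6,000 tokens of sermon in, ~2,000 tokens of analysis out.
for model in PRICES:
    print(f"{model}: ${cost(model, 6_000, 2_000):.3f}")
```

On this assumed workload, Opus costs roughly five times Haiku per call, which is the trade-off to weigh against its deeper analysis.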


Key Differences from Gemini

Claude does not have a free tier via the API. All three models require a paid API key. However, Claude’s model lineup is notably more stable — models with the same snapshot date are identical across all platforms and do not change. There are no “preview” models to worry about, and no risk of sudden deprecation with two weeks’ notice.

Claude also supports a larger context window. Both Opus 4.7 and Sonnet 4.6 support a 1 million token context window, which means the AI can hold an entire sermon archive in view when making connections — a significant advantage for long-term pattern recognition across years of preaching.
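A quick way to sanity-check whether an archive fits in that window is to estimate its token count before sending it. The sketch below uses the common rule of thumb of roughly 4 characters per token; this is only a heuristic (it varies by language, including for Korean text), and an exact count requires the provider's own tokenizer.

```python
# Rough check that a sermon archive fits in a 1M-token context window.
# Uses the ~4 characters-per-token rule of thumb, which is a heuristic;
# exact counts require the provider's tokenizer.
CONTEXT_WINDOW = 1_000_000  # tokens (Sonnet 4.6 / Opus 4.7, per this page)
CHARS_PER_TOKEN = 4         # rough heuristic; varies by language

def fits_in_context(texts: list[str], reserve_tokens: int = 8_000) -> bool:
    """Return True if the combined texts, plus a reserve for the prompt
    and the model's reply, are likely to fit in the context window."""
    estimated_tokens = sum(len(t) for t in texts) // CHARS_PER_TOKEN
    return estimated_tokens + reserve_tokens <= CONTEXT_WINDOW

# e.g. 300 sermons of ~10,000 characters each ≈ 750,000 tokens: fits.
print(fits_in_context(["x" * 10_000] * 300))
```

If the check fails, the archive would need to be split across calls or summarized before long-range pattern analysis.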


Which model should I choose?

For most users, Claude Sonnet 4.6 is the right choice. It delivers strong analysis quality at a reasonable cost, with fast enough response times for regular use.

If you want the deepest possible theological reasoning and long-term sermon pattern analysis, choose Claude Opus 4.7.

If you are primarily doing quick tasks — tagging, searching, or short summaries — Claude Haiku 4.5 is the most cost-efficient option.


A note on data privacy

Unlike Gemini’s free tier, Claude API usage does not use your content to improve Anthropic’s products. For pastors who prefer that their sermon content remains private and is not used for AI training purposes, this is an important distinction.


Model information current as of May 3, 2026. Anthropic updates its model lineup periodically. We will keep this page updated as changes are announced.

Gemini AI Models — What You Need to Know

LampstandAI uses Google’s Gemini AI to analyze and manage your sermons. You can choose which AI model powers your experience. Here’s what each model does and who it’s best for.


Stable Models (Recommended for all users)

Gemini 2.5 Flash Lite (gemini-2.5-flash-lite). Available to free and paid users.

The fastest and most affordable model. Good for simple tasks like tagging and summarizing short text. However, it struggles with deep theological analysis, long Korean sermon texts, and complex AI instructions. This is the model the app selects automatically for free users due to cost, but it may not give you the quality you expect for sermon analysis.

Gemini 2.5 Flash (gemini-2.5-flash). Available to free and paid users. Recommended default.

The best balance of speed, cost, and quality. Handles most sermon analysis tasks well, including lectionary connections and pattern recognition. This is the model we recommend for most users, whether free or paid.

Gemini 2.5 Pro (gemini-2.5-pro). Available to paid users only.

The most capable model for deep reasoning. Best for complex theological analysis, long-context sermon pattern recognition, and AI memory features. Responses take a little longer, but the depth of analysis is noticeably better. Recommended if you plan to use AI-assisted sermon preparation regularly.


Preview Models (Not recommended for regular use)

Gemini 3.1 Pro Preview (gemini-3.1-pro-preview). Paid users only.

Gemini 3.1 Flash Lite Preview (gemini-3.1-flash-lite-preview). Free and paid users.

These are Google’s newest, most powerful models — but they carry a significant risk: Google can shut them down with only two weeks’ notice. In fact, the previous version (Gemini 3 Pro Preview) was shut down on March 9, 2026, with no extended grace period. We do not recommend using these models for your regular workflow until they reach stable status.


Which model should I choose?

If you are a free user, start with Gemini 2.5 Flash. It is available at no cost and gives you meaningful AI analysis quality.

If you are a paid user and want the best possible sermon analysis, choose Gemini 2.5 Pro.

If speed matters more than depth — for example, when quickly browsing your archive — Gemini 2.5 Flash Lite is a reasonable choice.

Avoid Preview models for your regular workflow until further notice.


Model information current as of May 3, 2026. Google updates its model lineup regularly. We will keep this page updated as changes are announced.

How do I get an API key?

Please refer to the following videos.

Claude API

OpenAI (GPT) API

Google Gemini API

Bonus — All three in one video

Where can I get an API key?

Anthropic (Claude) API

OpenAI (GPT) API

Google Gemini API