Jaroslav Janda · 5 min read

From iOS to Android in One AI Prompt: A Native Porting Experiment

iOS Engineering · Mar 26, 2026


Jaroslav Janda, iOS Developer


TL;DR

A fully working Android app, built from a single AI prompt, in under four hours. STRV gave an AI coding agent detailed context about an existing iOS app and a clear set of rules. The result was a clean native app port, matching the original feature for feature with solid architecture from the start. The full repo is open source at github.com/strvcom/mobile-agentic-brew.

We had a question: What if you could take an existing iOS app and port it to Android using a single, well-crafted prompt?

Not a cross-platform framework. Not a manual rewrite. An AI coding agent with full context about the source app, producing a native Kotlin + Jetpack Compose implementation, 1:1 feature parity, idiomatic code and production-ready structure.

So we tried it. And it worked better than expected.

What App Did We Port?

We build native iOS and Android apps daily. Our iOS team has a mature project template with SwiftUI, coordinators, SwiftData persistence and all the tooling you'd expect: SwiftLint, SwiftFormat, Tuist, Fastlane, CI/CD. It's battle-tested.

We picked a real internal project called Chemex Coach. It’s a coffee brewing timer app built on top of this template. It has a brew setup screen with dose/ratio controls and brew math, a guided timer with step-by-step checkpoints, a brew summary with rating and notes, session history with grouping and detail views and a full settings screen. It's still a relatively simple app, but it has enough moving parts (timers, persistence, haptics, animations) to make the porting non-trivial.
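The brew math at the heart of the setup screen is simple ratio arithmetic. As a rough illustration, here is a hypothetical sketch in Kotlin; the names and signatures are ours, not taken from the Chemex Coach source:

```kotlin
// Hypothetical sketch of dose/ratio brew math; names are illustrative,
// not lifted from the Chemex Coach repo.
data class BrewPlan(val doseGrams: Double, val waterGrams: Double)

fun planBrew(doseGrams: Double, ratio: Double): BrewPlan =
    // Total water = coffee dose times the brew ratio
    // (e.g. 30 g of coffee at 1:16 -> 480 g of water).
    BrewPlan(doseGrams, doseGrams * ratio)
```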

The iOS version was itself built with AI assistance (Codex 5.3) through a series of incremental prompts. Once it was solid and tested, we moved to the main experiment.

The iOS App: Our Starting Point

iOS Brew Setup · iOS Brew Timer · iOS Brew Summary

The Approach: One AI Prompt to Rule Them All

The core hypothesis was simple. If you give an AI agent:

  1. Full read access to the existing iOS codebase (the source of truth)
  2. A detailed spec document describing every screen, data model, timer rule and edge case
  3. Clear architectural constraints for the target platform (Kotlin + Jetpack Compose, Room, DataStore, ViewModel + StateFlow)
  4. An explicit "no extras" rule to match iOS behavior exactly: don't add features, don't redesign

Then it should be able to produce a working native Android app.

We wrote a single, comprehensive prompt (~2,000 words) that defined five phases: restructure the repo into a monorepo; document the iOS app into a feature-parity checklist; build the Android app with 1:1 parity; capture screenshots and build a presentation website (vanilla HTML/CSS/JS only); and keep docs consistent.

Each phase had specific constraints. For example, the Android phase specified: Kotlin + Jetpack Compose, Navigation Compose with bottom tabs, ViewModel for state, Room for sessions + DataStore for preferences, the elapsed-time timer model (not tick counting), haptics/sound respecting settings and screen-awake behavior during brewing.

The prompt also explicitly called out edge cases: stop/discard/save confirmations, background/foreground lifecycle and accessibility semantics.

No .md instruction files. No custom skills or plugins. Just a well-structured prompt fed into Codex 5.3, high mode.

What Did the AI Agent Produce?

The agent produced 32 Kotlin files totaling ~3,800 lines of code — remarkably close to the iOS source (~50 Swift files, ~3,800 lines for the ChemexCoach feature).

android/app/src/main/java/com/strv/chemexcoach/
├── data/        (Repository, Room DAO, Database, SettingsStore)
├── domain/      (BrewEngine, BrewMath)
├── model/       (BrewInputs, BrewPlan, BrewSession, BrewSettings, BrewStep)
├── navigation/  (ChemexNavHost, ChemexRoutes)
└── ui/          (brew/setup, brew/timer, history, settings, common, theme)

The architecture follows standard Android conventions: ViewModel + StateFlow for state management, Navigation Compose with bottom tabs, Room for brew history, DataStore preferences for settings. The timer engine mirrors iOS timing semantics: start timestamp + accumulated pause duration, deterministic step transitions, 250ms tick cadence.
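The elapsed-time model can be sketched in plain Kotlin. This is a minimal illustration under our own naming (the actual BrewEngine API may differ); the key idea is that elapsed time is derived from timestamps, not from counting ticks, so the 250ms UI tick only re-reads a value that cannot drift:

```kotlin
// Hypothetical sketch of the elapsed-time timer model: elapsed time is
// a start timestamp plus accumulated pause duration, never a tick count.
// The clock is injected so the logic stays deterministic and testable.
class ElapsedTimer(private val now: () -> Long) {
    private var startedAt = 0L
    private var pausedAt: Long? = null
    private var accumulatedPause = 0L

    fun start() { startedAt = now() }

    fun pause() { if (pausedAt == null) pausedAt = now() }

    fun resume() {
        pausedAt?.let {
            accumulatedPause += now() - it  // bank the paused interval
            pausedAt = null
        }
    }

    // Elapsed millis excludes time spent paused; a periodic UI tick can
    // simply re-read this value without accumulating drift.
    fun elapsedMillis(): Long =
        (pausedAt ?: now()) - startedAt - accumulatedPause
}
```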

Beyond the Android app, the agent also reorganized the repo into a monorepo, wrote architecture docs and a feature-parity checklist and built a vanilla HTML/CSS/JS presentation site.

What Went Right

The architecture was clean. The separation into data/, domain/, model/, ui/ and navigation/ packages follows exactly what you'd expect from a senior Android engineer, not an AI coding agent on its first pass. Repository pattern with interface + implementation, Room DAO with proper entity mapping, DataStore for lightweight preferences.

Timer correctness was solid. The BrewEngine correctly implements the elapsed-time model with pause/resume support, skip-to-next-step and proper finish conditions. This is the kind of thing that's easy to get wrong, and the agent got it right on the first pass.
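Deterministic step transitions are the other half of that correctness. A hypothetical sketch (names are ours): if the current step is a pure function of elapsed time and the step durations, then pause, resume and skip can never desynchronize the UI from the engine:

```kotlin
// Hypothetical sketch of deterministic step transitions: the current
// step index is derived purely from elapsed time and step durations.
fun currentStep(elapsedSec: Int, stepDurationsSec: List<Int>): Int {
    var remaining = elapsedSec
    stepDurationsSec.forEachIndexed { index, duration ->
        if (remaining < duration) return index
        remaining -= duration
    }
    return stepDurationsSec.lastIndex  // past the end: hold the final step
}
```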

Feature parity was real. The generated checklist document mapped every screen, every interaction and every edge case. The Android implementation followed it closely: same navigation flow, same data models, same confirmations.

Side by Side: iOS vs. Android

iOS Brew Setup · Android Brew Setup


What Needed Fixing

It's not pixel-perfect, and we want to be upfront about that.

Functionally, everything (timer logic, persistence, navigation, settings) worked out of the box. One crash surfaced when rapidly hammering the ratio slider (state backpressure + floating-point rounding). A second prompt fixed it: debouncing writes, proper clamping, stable recomposition. Done.
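The debouncing half of that fix lives at the Flow level, but the clamping and stable-rounding half can be shown in isolation. A hypothetical sketch (the 10.0–20.0 bounds and one-decimal precision are illustrative, not from the app):

```kotlin
import kotlin.math.roundToInt

// Hypothetical sketch of the slider fix: clamp the raw slider value into
// a valid range and round to one decimal place, so rapid updates produce
// stable, comparable values instead of floating-point noise.
fun stableRatio(raw: Double): Double =
    (raw.coerceIn(10.0, 20.0) * 10).roundToInt() / 10.0
```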

Visually, there are minor differences: the “Start Brew” button lacks the play icon and the ratio label formatting varies (1:16 vs. 1:16.0). Each is a matter of minutes to fix with a follow-up prompt, not hours of engineering work.

But that's the whole point. The core conversion (architecture, business logic, state management, persistence, navigation) just works. The alternative? A dedicated Android engineer, weeks of work, back-and-forth on feature parity. Here, the iOS-to-Android port took under four hours and zero API cost: everything ran locally. The visual polish? A couple more prompts. The functionality? That was there from prompt one.

Porting Results 

The app design was created in Stitch and the iOS app icon was generated via a Gemini prompt. A small detail, but worth noting: the design workflow was also AI-assisted end to end.

What Does AI App Porting Mean for Mobile Teams?

Let's be clear: this is a proof of concept. A first hypothesis. This is what agentic engineering looks like in practice: defining the right constraints, giving the agent full context and letting it build.

What it shows is that the native app porting step (taking a well-documented app and creating an idiomatic native equivalent on another platform) is something AI agents handle remarkably well given proper context and constraints. The output wasn't a rough prototype; it was structurally what a mid-to-senior engineer would produce.

The key insight: prompt quality matters more than anything. A vague “port this to Android” wouldn’t have worked. This is prompt engineering for mobile: the detailed spec, clear constraints and the “no extras” rule made the difference.

What Comes After a Successful AI Port?

This is just the beginning. The experiment raised a bunch of follow-up questions we're excited to explore:

  • Can we close the loop? Right now, a human reviews the output, tests it and writes fix-up prompts. What if the agent could run the app, capture screenshots, compare them against iOS and iterate autonomously?
  • What about more complex apps? Chemex Coach is a real app, but it's a single-feature MVP. What happens with networking, auth flows, deep linking, push notifications?
  • Can we reduce the human bottleneck further and make the workflow fully agentic? Most of the human time was spent waiting for the AI agent, validating outputs and iterating on prompts (we used ChatGPT to help sharpen the prompts). Can we automate the validation loop so the agent self-corrects?

If this topic interests you, let us know. We're planning a follow-up where humans aren't the bottleneck.

How to Try This Yourself

The entire repo is open source. Prompts are right there in the git commit messages: github.com/strvcom/mobile-agentic-brew

Every commit message is the actual prompt that was used to generate that change. The history reads like a conversation between an engineer and an AI agent because that's exactly what it is.


