Build an app with SwiftUI
Learn how to build a SwiftUI app that uses Replicate to run a machine learning model.
By the end, you'll have an app that runs on iOS and macOS and generates images from text prompts using Stable Diffusion.
Prerequisites
- Xcode: You'll need Xcode to build and run your app. Download the latest version of Xcode from developer.apple.com.
- A Replicate account: You'll use Replicate to run machine learning models. It's free to get started, and you get a bit of credit when you sign up. After that, you pay per second for your usage. See how billing works for more details.
1. Create the app
SwiftUI is a framework for building native apps for Apple devices. It's a great choice for getting something up and running fast, and is well suited to prototyping ideas with Replicate.
Open Xcode and create a new project by selecting "File" > "New" > "Project…" (⇧⌘N).
Under "Multiplatform", select the "App" template and click "Next". Give your app a name, such as "ReplicateExample", and click "Next". Then save your project to a working directory.
Now's a good time to make sure everything is working as expected. In Xcode, select "Product" > "Run" (⌘R) to build and run the app on your device or simulator.
If you see a "Hello, world!" message, you're ready to move on to the next step.
2. Add Replicate's Swift package dependency
Use the official Swift package to run machine learning models on Replicate from your app.
In Xcode, select "File" > "Add Packages…". Copy https://github.com/replicate/replicate-swift and paste it into the search bar. Select replicate-swift from the list and click the "Add Package" button.
Once Xcode finishes downloading the package, you'll be prompted to choose which products to add to your project. Select Replicate's library and add it to your example app target.
3. Configure your app
Enable network access for your app so that it can connect to Replicate.
In project settings, select the "ReplicateExample" target, then select the "Signing & Capabilities" tab. Under "App Sandbox", check the box next to "Outgoing Connections (Client)".
4. Set up Replicate's client
Now it's time to write some code.
In the Project Navigator, open the ContentView.swift file. Add the following code to the top of the file, replacing <#token#> with your API token:
import Replicate
private let client = Replicate.Client(token: <#token#>)
For this example, we're hard-coding the API token in the app. This is just to help you get started quickly and isn't recommended for production apps: you shouldn't store secrets in code or in any other resources bundled with your app. Instead, fetch them from CloudKit or another server and store them in the Keychain.
For more information, consult Apple's documentation for CloudKit and the Keychain.
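As an illustration only, here is a minimal sketch of storing and loading the token with the Security framework's Keychain APIs. The TokenStore type, the service name "com.example.ReplicateExample", and the account name "replicate-api-token" are placeholders invented for this sketch, not values from this tutorial.

// A minimal Keychain wrapper sketch. Identifiers below are placeholders;
// adapt them to your own app.
import Foundation
import Security

enum KeychainError: Error {
    case unexpectedStatus(OSStatus)
}

enum TokenStore {
    // Base attributes identifying the stored item.
    private static let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.ReplicateExample",
        kSecAttrAccount as String: "replicate-api-token",
    ]

    static func save(_ token: String) throws {
        // Remove any existing item, then add the new value.
        _ = SecItemDelete(query as CFDictionary)

        var attributes = query
        attributes[kSecValueData as String] = Data(token.utf8)

        let status = SecItemAdd(attributes as CFDictionary, nil)
        guard status == errSecSuccess else {
            throw KeychainError.unexpectedStatus(status)
        }
    }

    static func load() -> String? {
        var attributes = query
        attributes[kSecReturnData as String] = true
        attributes[kSecMatchLimit as String] = kSecMatchLimitOne

        var result: CFTypeRef?
        guard SecItemCopyMatching(attributes as CFDictionary, &result) == errSecSuccess,
              let data = result as? Data
        else { return nil }
        return String(data: data, encoding: .utf8)
    }
}

With something like this in place, you could fetch the token from your server once, call TokenStore.save(_:), and initialize Replicate.Client with the value returned by TokenStore.load() on subsequent launches.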
5. Define the model
Models on Replicate have typed inputs and outputs, so it's convenient to define a Swift type for each model your app uses.
In ContentView.swift, add the following code:
// https://replicate.com/stability-ai/stable-diffusion
enum StableDiffusion: Predictable {
    static var modelID = "stability-ai/stable-diffusion"
    static let versionID = "db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf"

    struct Input: Codable {
        let prompt: String
    }

    typealias Output = [URL]
}
- Predictable is a protocol that defines a common interface for all models.
- modelID is the ID of the model we want to run: in this case, "stability-ai/stable-diffusion" for Stable Diffusion.
- versionID is the ID of the version of the model we want to run. Here, we're using the latest version at the time of writing.
- Input and Output define the types of the model's input and output. In this case, the input is a struct with a prompt property, and the output is a list of URLs to the generated images. (Stable Diffusion has additional inputs, including an option for how many images to generate, but we're keeping things simple for this example; see the sketch after this list for one way to expose more of them.)
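If you later want to expose more of the model's options, you can extend Input with additional fields. The extra field names below (num_outputs, width, height) are based on Stable Diffusion's published inputs, but treat them as assumptions and confirm them against the model's API page before relying on them.

// A sketch of a richer input type for the same model. Verify the field
// names against https://replicate.com/stability-ai/stable-diffusion/api.
struct Input: Codable {
    let prompt: String
    var numOutputs: Int? = nil  // how many images to generate
    var width: Int? = nil       // output width in pixels
    var height: Int? = nil      // output height in pixels

    private enum CodingKeys: String, CodingKey {
        case prompt
        case numOutputs = "num_outputs"
        case width
        case height
    }
}

Because the extra properties are optionals, the synthesized Codable conformance omits them from the request when they're nil, so the model falls back to its own defaults.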
Next, add a prompt and a prediction property to ContentView, and define generate() and cancel() methods:
struct ContentView: View {
    @State private var prompt = ""
    @State private var prediction: StableDiffusion.Prediction? = nil

    func generate() async throws {
        prediction = try await StableDiffusion.predict(with: client,
                                                       input: .init(prompt: prompt))
        try await prediction?.wait(with: client)
    }

    func cancel() async throws {
        try await prediction?.cancel(with: client)
    }

    // ...
The generate() method creates a prediction and waits for it to complete. Because Prediction is a value type, the UI will automatically update when the prediction completes.
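Note that generate() is a throwing function, so callers decide how to handle failures. One possible approach, assuming you add an extra errorMessage state property to ContentView (not part of the code above), is to catch the error and store it for display:

// A hypothetical helper on ContentView. `errorMessage` is an assumed extra
// property, e.g. @State private var errorMessage: String? = nil
func generateReportingErrors() async {
    do {
        try await generate()
    } catch {
        // Surface the failure in the UI instead of silently discarding it.
        errorMessage = error.localizedDescription
    }
}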
6. Implement the rest of ContentView
Finally, wire up the UI to call these methods and display the generated image.
The content view's body has a Form with a Section containing a TextField. When the user types text into this field and submits the form, that text is passed to the generate() method to create a prediction.
var body: some View {
    Form {
        Section {
            TextField(text: $prompt,
                      prompt: Text("Enter a prompt to display an image"),
                      axis: .vertical,
                      label: {})
                .disabled(prediction?.status.terminated == false)
                .submitLabel(.go)
                .onSubmit(of: .text) {
                    Task {
                        try await generate()
                    }
                }
        }
Under the text field is a conditional block that renders the prediction from the time it's created until it finishes.

- starting and processing: Show an indeterminate loading indicator as well as a button to cancel the prediction.
- succeeded: Show the generated image using an AsyncImage component.
- failed: Show an error message.
- canceled: Show a status message to the user.
The ZStack acts as a placeholder to keep everything in place while waiting for the prediction to finish.
if let prediction {
    ZStack {
        Color.clear
            .aspectRatio(1.0, contentMode: .fit)

        switch prediction.status {
        case .starting, .processing:
            VStack {
                ProgressView("Generating...")
                    .padding(32)
                Button("Cancel") {
                    Task { try await cancel() }
                }
            }
        case .succeeded:
            if let url = prediction.output?.first {
                VStack {
                    AsyncImage(url: url, scale: 2.0, content: { phase in
                        phase.image?
                            .resizable()
                            .aspectRatio(contentMode: .fit)
                            .cornerRadius(32)
                    })

                    ShareLink("Export", item: url)
                        .padding(32)
                }
            }
        case .failed:
            Text(prediction.error?.localizedDescription ?? "Unknown error")
                .foregroundColor(.red)
        case .canceled:
            Text("The prediction was canceled")
                .foregroundColor(.secondary)
        }
    }
    .frame(maxWidth: .infinity, maxHeight: .infinity, alignment: .center)
    .padding()
    .listRowBackground(Color.clear)
    .listRowInsets(.init())
}
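For reference, here is how the pieces above nest inside body. This is only a structural sketch with the view contents elided, not additional code from the tutorial:

var body: some View {
    Form {
        Section {
            // the TextField from the beginning of this step goes here
        }

        // the `if let prediction { ZStack { ... } }` block shown above
        // goes here, as another row of the same Form
    }
}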
7. Create a prediction
Your app should be ready to use now! In Xcode, select "Product" > "Run" (⌘R) to run the app locally.
Next steps
Huzzah! You should now have a working app that's powered by machine learning.
But this is just the start. Here are some ideas for what you can do next:

- Show your friends what you've built.
- Before you go too much further, make sure to set up CloudKit to securely store your API key, as you definitely don't want to commit it to source control.
- Integrate a super resolution model into your new app to upscale the generated images to a higher resolution (see the sketch after this list).
- Explore other models on Replicate and integrate them into your app.
- Update the README if you're planning to open-source your project so others know how to use it and contribute.
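For example, a second model type for upscaling could follow the same Predictable pattern as StableDiffusion. The model name below is one upscaling option on Replicate, but the version ID placeholder and the input and output types are assumptions to verify on the model's API page before use.

// https://replicate.com/nightmareai/real-esrgan
// Sketch only: fill in the version ID from the model page, and check the
// input and output types against the model's schema.
enum Upscaler: Predictable {
    static var modelID = "nightmareai/real-esrgan"
    static let versionID = "<#latest version ID from the model page#>"

    struct Input: Codable {
        let image: URL     // assumed name for the source image input
        var scale: Int = 4 // assumed upscaling factor
    }

    typealias Output = URL
}

You could then pass one of the URLs from StableDiffusion's output as the image input when calling Upscaler.predict(with:input:).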