July 20, 2025 flutter ai

Welcome to dartantic_ai 1.0!

Dartantic is an agentic framework designed to make building client and server-side apps in Dart with generative AI easier and more fun!

It works across providers (Google, OpenAI, Anthropic, etc.) and runs anywhere your Dart code runs (Flutter desktop, Flutter mobile, Flutter web, CLI, server).

It allows you to write code like this:

// Tools that work together
final tools = [
  Tool(
    name: 'get_current_time',
    description: 'Get the current date and time',
    onCall: (_) async => {'result': DateTime.now().toIso8601String()},
  ),
  Tool(
    name: 'find_events',
    description: 'Find events for a date',
    inputSchema: JsonSchema.object({
      'date': JsonSchema.string(),
    }),
    onCall: (args) async => ..., // find events
  ),
];

// Agent chains tools automatically, no matter what provider you're using,
// e.g. openai, google, openrouter or your custom provider. And if you want to
// specify the model, you can, e.g. "openai:gpt-4o", "google:gemini-2.5-flash" or
// "your-provider:your-model".
final agent = Agent('openai', tools: tools);
final result = await agent.send('What events do I have today?');

// Agent will:
// 1. Call get_current_time to figure out what "today" means
// 2. Extract date from response
// 3. Call find_events with that date
// 4. Return final answer with events

I had all of that working with Gemini and OpenAI LLMs three weeks ago. I just needed to add support for a few more providers and I’d be ready for a 1.0. So I did what anyone would do: I spent three weeks rebuilding dartantic from first principles.

Building on langchain_dart

It was three weeks ago when I first really dove into the most excellent langchain_dart repo from David Miguel Lozano. And when I did, I discovered that he was way ahead of me with features AND providers. There was a lot of Langchain stuff in there of course — David had been very thorough — but it also had a lovely compatibility layer over the set of LLM provider-specific Dart SDK packages (which David also built and maintained). So, on the day after I launched dartantic 0.9.7 at FlutterCon in New York, I sat down with Claude Code and carved my way into David’s Langchain implementation, chipping away until I had extracted that compat-layer.

And on top of that, I built dartantic_ai 1.0.

As you can see from the most epic CHANGELOG ever, I learned a ton from David along the way, including:

  • to use Dart types for typed output on the Agent.sendFor<TOutput> method instead of on the Agent itself so each LLM response can have its own type
  • to use Dart types for typed input on tool calls on the parameterized Tool<TInput> type itself
  • to use a parameterized model options parameter so each model can be created in a generic way, but also support provider-specific typed model options
  • to expose a set of static provider instances, e.g. Providers.openai, Providers.anthropic, etc. to make it easy to just grab one without using string names if you don’t want to
  • to expose usage tracking
  • to handle embeddings in chunks
  • and so many other tiny details that just make dartantic better!

David’s langchain base allowed me to build support for 11x providers, 5x native (Mistral, Anthropic, Google, OpenAI and Ollama) and 6x more OpenAI-compatible configurations (Together, Cohere and Lambda as well as Ollama and Google configurations for their OpenAI-compatible endpoints). All 11x providers handle chat and 5x of them handle embeddings. I started with more OpenAI-compatible configurations, but their implementations were either so weak, so flaky or both (I’m looking at you, Nvidia) that I dropped them — they couldn’t pass the more than 1100 tests I built out to test dartantic’s support for them. But feel free to drop in your own!
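
For flavor, here’s roughly what a couple of those learnings look like in practice. This is a hedged sketch: the Agent and Providers names come straight from the list above, but the exact parameter names for typed output are illustrative (and TownAndCountry is the class from the May post further down), so check the dartantic docs for the real signatures.

// Rough sketch only -- parameter names are illustrative, not the exact API.
// Providers can be looked up by name or grabbed as static instances:
final agent = Agent('anthropic', tools: tools); // named lookup, default model
final openai = Providers.openai;                // static instance, no strings

// Typed output is requested per call via sendFor<TOutput> rather than on the
// Agent itself (schema/decoder parameter names below are assumptions):
final result = await agent.sendFor<TownAndCountry>(
  'The windy city in the US of A.',
  outputSchema: TownAndCountry.schemaMap,
  outputFromJson: TownAndCountry.fromJson,
);
print(result.output); // TownAndCountry(town: Chicago, country: United States)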

Industrial Strength

On top of David’s langchain work, I then built out a lot of new features for dartantic, including:

  • custom providers that participate in the named lookup just like the built-in providers
  • typed output
  • typed tool input
  • typed output WITH tool calls WITH streaming (progressive JSON rendering anyone?)
  • multi-provider chat-compatible message format
  • thorough logging w/ easy setup and filtering
  • usage tracking
  • and more…

You can see the nitty gritty in the dartantic docs.

What’s Next

I’ve separated out the core dartantic interfaces so that you can build a dartantic provider without depending on all of dartantic and so that I can make sure that dartantic continues to run everywhere that Dart runs. I’m working with the nice folks at Cactus to get their enterprise-grade local mobile-device-optimized LLMs into dartantic as a custom provider. I also want to get a provider for firebase_ai in there for my Flutter peeps who don’t want to mess with API keys in their client apps.

Of course, anyone that wants to can build a dartantic provider. Let me know if you do! I’d love to track them in the docs.

I also have plans to support image generation and audio transcription, as well as the new OpenAI Responses API and context caching to reduce token usage.

And I have big dreams for a dartantic builder that translates Dart types into JSON serialization and JSON schema for you automatically, streamlining the agent creation considerably:

@Agent()
class TacAgent extends Agent {
  TacAgent(super.model);

  @Run()
  Future<TownAndCountry> run(String prompt) => _$TownAndCountryAgentRun(prompt);
  
  @Tool()
  Future<DateTime> getCurrentDateTime() async => DateTime.now();
}

I’m tracking my ideas for the future of dartantic on GitHub. Feel free to add your own.

Where Are We

My goal with dartantic isn’t for me to be a one-man band. The idea is that dartantic can grow with the AI needs of the Dart and Flutter community, while maintaining its principles of multi-provider support, multi-platform support and fun!

Want to steer where dartantic goes? Hate something and want it fixed? Get involved! Here’s how:

If you’re building AI-powered apps in Dart or Flutter, give dartantic a try. Switch between providers. Use typed output. Make tool calls. Build agents. Break things. Swear at it. Then come tell me what went wrong.

Welcome to dartantic 1.0. Let’s go break some stuff together.

July 1, 2025 flutter

Critical Flutter OSS Projects Need Love! (aka Funding)

In a recent tweet, Dinko makes an excellent point about much of our Dart ecosystem being “comprised of hobby project[s]” and having “limited support.” He’s not wrong and that applies to Flutter as well. Most of the Flutter ecosystem is driven by passionate OSS developers. And that passion drives different devs to do different things — some like the code, some like the samples, some like the docs, some like the support and some just like to publish a package right out of flutter create and then move on with their day.

Most Dart packages are a hobby project or have limited support

It is unfortunately true that few packages get professional-quality care. For that, you typically need money.

Flutter Favorites

The Flutter Favorite program was invented to give recognition to developers building high-quality packages that really serve the Dart and Flutter community. And it worked. When I was on the team, the Flutter Favorites program helped both the number of packages on pub.dev and their average quality to increase. These are not metrics that typically go together.

Unfortunately, I believe that we’ve reached a quality plateau on pub.dev. And I think I know why.

The Need for Flutter Funding

One of the jobs of the Flutter Favorites committee was to provide encouragement and help for package authors on the cusp of meeting the quality bar for a Flutter Favorite candidate package. As one example, I remember reaching out to the author of a popular BLE package that needed some work. There was some missing functionality and the overall package didn’t meet the quality bar, but it was a great start and clearly filling a real need in the community. The author told me that he’d love to be a Flutter Favorite, but he was too busy making a living to dedicate the time. If only we’d had a small amount of funding…

Unfortunately, there was no funding for such things at the time. And except for the FlutterFlow Flutter Funding program (which sounds great, btw, but may be biased towards projects that overly favor FlutterFlow), there is no broadly-focused organization whose job it is to fund engineering projects in the Dart and Flutter ecosystem.

I would love to see more of this — companies resurrecting important, under-maintained packages.

As Eric Seidel pointed out recently, many critical packages in our ecosystem are showing their age. When you look at the data — packages like google_sign_in with 31 open issues, flutter_barcode_scanner with 201 open issues and no updates in 3 years — it’s clear that we have a sustainability problem. The community is willing to help, but maintainers need time, and time costs money.

I have an idea.

Proposal: Flutter Funding Committee

I’ve been working on a proposal to create the Flutter Funding Committee (FFC) — a non-profit organization designed to fill exactly these funding gaps. The idea is simple: take donations from companies deeply invested in the flourishing Dart and Flutter ecosystem and use that money to fund important community projects that would otherwise fall through the cracks. Think of it as an “escape valve” that allows the Dart and Flutter team to move maintenance work off their plate without dropping any balls (to shake a couple of metaphors together with a swizzle stick).

The committee would fund things like:

  • Maintenance and improvement of critical pub.dev packages that are widely used but under-resourced
  • Bug fixes and improvements in Flutter/Dart that are important to the broader ecosystem but not on the current roadmap
  • New tools, packages or plugins that fill gaps in the ecosystem
  • Bounties for targeted issues or enhancements requested by the community

But here’s the key differentiator from similar foundations like the Python Software Foundation or the OpenJS Foundation: the FFC explicitly would NOT take over technical governance. The FFC would be purely a funding mechanism to support the ecosystem around the Dart and Flutter core.

Projects We Could Fund

So what would this look like in practice? Let me give you some concrete examples of the kinds of projects the FFC could tackle:

Emerging Tech Integration: Want to see Flutter work better with emerging technologies? Let’s fund someone to build proper WebAssembly integration, or an agentic toolkit like the cool Python kids have, but for Dart!

Critical Package Maintenance: Remember that BLE package I mentioned? Or google_sign_in. Or another package near and dear to my heart, go_router. It’s currently in maintenance mode, but it has 257 (!) open issues with almost 2M downloads. This is a package in need of some love.


go_router in maintenance mode


Desktop Feature Gaps: Flutter’s desktop support is solid but still has gaps that the core team hasn’t prioritized. Things like system-level menus, OS integration features or platform-specific UI components that would make Flutter desktop apps feel truly native.

And the list goes on…

The beauty of the FFC approach is that the companies funding these projects would get to vote on what gets priority. If you’re a company building Flutter apps for enterprise customers, you might prioritize desktop features. If you’re focused on mobile, maybe you care more about social auth. The committee would work together to decide.

Where Are We?

This isn’t just a pipe dream. I’ve already had conversations with key players in the Flutter ecosystem, and the response has been largely positive (although nobody has written a check yet : ). Companies that depend on Flutter for their business understand the value proposition immediately — instead of each of them solving ecosystem problems in isolation, they can pool resources and tackle the big issues together.

What I find most exciting about this approach is that it builds on the proven success of the Flutter Favorites program. We know that recognition and quality standards work. Now we’re just adding the missing piece: funding to make those standards achievable for maintainers who care but can’t afford to work for free.

The Flutter ecosystem has grown up. We’ve got millions of developers, thousands of companies betting their businesses on Flutter, and a package ecosystem that rivals any platform out there. It’s time our funding mechanisms grew up too.

If you’re interested in this idea — whether as a potential funding company, a maintainer who could benefit, or just someone who cares about Flutter’s long-term health — I’d love to hear from you. Because the future of Flutter isn’t just about what Google builds; it’s about what we build. Together.

May 28, 2025 flutter ai

Flutter AI Tool Calling

A little while ago, I was inspired by Thorsten’s blog post on building an AI Agent using Go to build an AI Agent using Dart. The combination of a conversation with Gemini and a set of tools allowed us to build an agent that could take some prompts from the user and turn them into not just responses (Ask mode), but actions (Agent mode!). In the spirit of Flutter’s Agentic Apps month, I wanted to share how to do the same thing in your Flutter app using the most recent release of the Flutter AI Toolkit.

Flutter AI Toolkit v0.9.0

As we near the 1.0 release of the AI Toolkit, the community has continued to contribute features that they’d like to see in a customizable, style-able and LLM-pluggable widget you can use when you’d like your users to be able to talk to an AI in the context of your app.

In this case, Toshi Ossada contributed a PR that provided the inspiration for tool calling for the new FirebaseProvider in the AI Toolkit. This new provider replaces both the GeminiProvider and the VertexProvider as described in the migration guide for v0.9.0.

Tool calling is the ability to augment an LLM with a set of functions — what the AI industry refers to as “tools” — that the LLM can call when it needs the data that the tools provide. For example, an LLM by itself has no idea what time it is; it needs to have some tool that can provide that information and if it doesn’t have one, it’ll just make something up. With confidence.

Here’s an example of how to provide tools to the FirebaseProvider:

class ChatPage extends StatelessWidget {
  const ChatPage({super.key});

  @override
  Widget build(BuildContext context) => Scaffold(
    appBar: AppBar(title: const Text(App.title)),
    body: LlmChatView(
      provider: FirebaseProvider(
        model: FirebaseAI.googleAI().generativeModel(
          model: 'gemini-2.0-flash',
          tools: [
            Tool.functionDeclarations([
              FunctionDeclaration(
                'get_temperature',
                'Get the current local temperature',
                parameters: {},
              ),
              FunctionDeclaration(
                'get_time',
                'Get the current local time',
                parameters: {},
              ),
            ]),
          ],
        ),
        onFunctionCall: _onFunctionCall,
      ),
    ),
  );
  
  ... // _onFunctionCall goes here...
}

This code initializes the model with two tools: get_temperature and get_time. These tools come with names and descriptions so that the model can understand what they’re for and make an informed decision about when to call them.

The model is also initialized with an onFunctionCall callback so that when the LLM wants to use one of those tools, your function is called to handle it:

class ChatPage extends StatelessWidget {
  ...

  // note: we're not actually calling any external APIs in this example
  Future<Map<String, Object?>?> _onFunctionCall(
    FunctionCall functionCall,
  ) async => switch (functionCall.name) {
    'get_temperature' => {'temperature': 60, 'unit': 'F'},
    'get_time' => {'time': DateTime(1970, 1, 1).toIso8601String()},
    _ => throw Exception('Unknown function call: ${functionCall.name}'),
  };
}

We’re just returning hard-coded values here, but this is the place where you’d look up the data that the LLM wants as part of fulfilling the user’s request, as shown here:

In this example, we’re just looking up information that the LLM would have trouble getting on its own. However, if a call to a tool has a side effect then BOOM you’ve moved from Ask mode to Agent mode. Welcome to the future!
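
To make that concrete, here’s a hedged sketch of a handler where one hypothetical tool (add_todo) actually changes app state rather than just reading data; the tool names and the todo list are made up for illustration, but the handler shape matches the _onFunctionCall above.

// Hypothetical tools: get_time only reads data, but add_todo has a side
// effect -- it changes app state -- which is what turns Ask mode into Agent mode.
final todos = <String>[];

Future<Map<String, Object?>?> _onFunctionCall(
  FunctionCall functionCall,
) async => switch (functionCall.name) {
  'get_time' => {'time': DateTime.now().toIso8601String()},
  'add_todo' => _addTodo(functionCall.args['title'] as String? ?? 'untitled'),
  _ => throw Exception('Unknown function call: ${functionCall.name}'),
};

// Stand-in for whatever action your app performs: update state, call an API,
// write to a database, etc.
Map<String, Object?> _addTodo(String title) {
  todos.add(title);
  return {'added': title, 'count': todos.length};
}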

Flutter + Genkit + Interrupts (oh my!)

This all works great so long as everything is handled on the client side. However, as soon as you mix in server-side LLMs, for example by using Genkit, you have some additional considerations to take into account.

Genkit is an open-source framework for building full-stack AI-powered applications, developed and used in production by Google. It currently has language support for TypeScript/JavaScript, Go and Python but unfortunately not Dart. However, if you’re willing to write some TypeScript, it turns out that Genkit has great support for server-side tools handled by Flutter apps via something called interrupts.

A Genkit interrupt is a tool that’s invoked on the server but fulfilled on the client.

“How can that work?” I’m hearing you ask through the Interwebtubes.

Well let me tell you.

The way an LLM tool works is that during the handling of a request, if there’s a tool involved, the LLM API will call back into the function you provided via a callback like onFunctionCall. That call might take a while, e.g. you may need to dial up the national weather service, but when it’s done, the Future completes and the LLM can carry on.

“That’s great,” you say. “But how do I stretch a function callback over the wire from a Genkit server to a Flutter client?” You ask such good questions.

Well, as it turns out, tool calls being invoked in the middle of an LLM response is an API fiction. What’s really happening is that when an LLM wants to call a tool, it replies with a message marked as a tool call that includes the tool arguments. The LLM client library — like the firebase_ai package — notices this, calls your callback function, bundles up the response and continues the conversation without bothering you about it at all until the actual LLM response comes back, having taken into account the results of the tool call(s).

For example, the user’s request above in a single prompt looks like it returns a single response, but looking at the actual message history tells a different story:

[
  {
    "role": "user",
    "parts": [
      {
        "text": "what's the time and temperature?"
      }
    ]
  },
  {
    "role": "model",
    "parts": [
      {
        "functionCall": {
          "name": "get_time",
          "args": {}
        }
      },
      {
        "functionCall": {
          "name": "get_temperature",
          "args": {}
        }
      }
    ]
  },
  {
    "role": "function",
    "parts": [
      {
        "functionResponse": {
          "name": "get_time",
          "response": {
            "time": "1970-01-01T00:00:00.000"
          }
        }
      },
      {
        "functionResponse": {
          "name": "get_temperature",
          "response": {
            "temperature": 60,
            "unit": "F"
          }
        }
      }
    ]
  },
  {
    "role": "model",
    "parts": [
      {
        "text": "OK. The current time is 1970-01-01T00:00:00.000 and the temperature is 60 degrees Fahrenheit."
      }
    ]
  }
]

“OK,” you say, warming up to another question, “but what’s that got to do with Genkit and server-side tool calls?” I’m getting there!

Genkit Tools

Genkit also provides for tool calls, as shown in this example:

// define a tool
const getWeather = ai.defineTool(
  {
    name: 'getWeather',
    description: 'Gets the current weather in a given location',
    inputSchema: z.object({
      location: z.string().describe('The location to get the current weather for'),
    }),
    outputSchema: z.string(),
  },
  async (input) => {
    // Here, we would typically make an API call or database query. For this
    // example, we just return a fixed value.
    return `The current weather in ${input.location} is 63°F and sunny.`;
  },
);

// use a tool
const response = await ai.generate({
  prompt: "What is the weather in Baltimore?",
  tools: [getWeather],
});

This is semantically the same — we define a tool that the LLM can call during a request. And again, it looks like a seamless callback, which you need to implement on the server, even though we know there is a flow of messages underneath just like what we see above.

But what if you could interrupt the flow of messages when there’s a tool call, pass the stack of messages back to the Flutter app and let it fill in the results? That’s exactly what Genkit interrupts are for.

Genkit Interrupts: Human in the Loop

From the Genkit docs: “Interrupts are a special kind of tool that can pause the LLM generation-and-tool-calling loop to return control back to you. When you’re ready, you can then resume generation by sending replies that the LLM processes for further generation.”

As an example, imagine that you’ve got an app that helps people with their plants, maybe expanding their garden or diagnosing their sick plants.

Further imagine that you’ve got an LLM in your server-side code with access to a database of products that can help users with their plant needs.

Now imagine that the LLM has been instructed to ask the user a set of questions to clarify the plant needs before recommending one of those products.

Since the LLM is running in your Genkit server with access to your server-side database of products, to involve the user of your Flutter app in a conversation, you’ve now got the perfect storm for using interrupts to keep the “human in the loop.”

To implement this in Genkit, you define your tools as interrupts so that the LLM can pause the response to gather data from the user:

const choiceInterrupt = ai.defineInterrupt(
  {
    name: 'choice',
    description: 'Asks the user a question with a list of choices',
    inputSchema: z.object({
      question: z.string().describe("The model's follow-up question."),
      choices: z.array(z.string()).describe("The list of choices."),
    }),
    outputSchema: z.string().describe("The user's choice."),
  });

In Genkit, an endpoint that you can call from your Flutter app via HTTP is called a “flow”:

const greenThumb = ai.defineFlow(
  {
    name: "greenThumb",
    ...
  },
  async ({ prompt, messages, resume }) => {
    const response = await ai.generate({
      ...(messages && messages.length > 0 ? {} : { system: gtSystem }),
      prompt,
      tools: [choiceInterrupt, ...],
      messages,
      resume,
    });

    return { messages: response.messages };
  });

Notice that the greenThumb flow takes a set of messages and returns a set of messages. To kick things off in your Flutter code, you pass an empty list of messages. If the last message in the LLM response is an interrupt, it will include the info you need to show a UI asking the user to answer the LLM’s question:

{
  "messages": [
    {
      "role": "system",
      "content": [
        {
          "text": "\n  You're an expert gardener. The user will ask a question about how to manage\n  their plants in their garden. Be helpful and ask 3 to 5 clarifying questions,\n  using the choiceInterrupt tool.\n  \n  When you're done asking questions, provide a description of a product or\n  products that will help the user with their original query. Each product\n  description should NOT include another question for the user nor should it\n  include the name of any specific product.\n"
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "text": "I'd like to expand my garden."
        }
      ]
    },
    {
      "role": "model",
      "content": [
        {
          "text": "That's great! To give you the best advice, I need a little more information.\n\n1.  What kind of plants are you planning to add to your garden (e.g., vegetables, flowers, shrubs)?\n2.  What is the approximate size of the area you're planning to expand?\n3.  What type of soil do you have in your garden currently (e.g., sandy, clay, loamy)?\n4.  What is the average amount of sunlight your garden receives each day?\n\n"
        },
        {
          "toolRequest": {
            "name": "choiceInterrupt",
            "input": {
              "choices": [
                "Vegetables",
                "Flowers",
                "Shrubs",
                "Other"
              ],
              "question": "What kind of plants are you planning to add to your garden?"
            }
          },
          "metadata": {
            "interrupt": true
          }
        }
      ]
    }
  ]
}

This continues until the LLM gathers the data it needs to provide a response:

{
  "messages": [
    ...
    {
      "role": "model",
      "content": [
        {
          "text": "Based on your answers, here are a couple of product descriptions that might be helpful for expanding your flower garden:\n\n**Product 1:** This soil amendment is designed to improve drainage and aeration in existing soil, while also providing essential nutrients for healthy root development. It's perfect for preparing new garden beds or revitalizing existing ones. Simply mix it into the top layer of soil before planting to create an ideal environment for flowers to thrive.\n\n**Product 2:** These granular plant food spikes provide a slow-release source of nutrients for flowering plants. They are easy to use - simply insert them into the soil around your existing plants, and they will gradually release nutrients over an extended period, promoting vibrant blooms and strong growth. They're especially beneficial for areas with full sun exposure, as they help plants maintain consistent access to the resources they need.\n"
        }
      ]
    }
  ]
}
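
For reference, here’s a hedged sketch of what the Flutter side of that loop might look like with package:http. The endpoint URL is hypothetical and the data/result envelope is an assumption about how the flow is served over HTTP, so adjust it for your deployment (the flutter_fixit_warehouse sample below has the real details).

// Hedged sketch: call the greenThumb flow, passing the running message list.
// The URL and the data/result envelope are assumptions about the deployment.
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<List<dynamic>> callGreenThumb({
  required String prompt,
  List<dynamic> messages = const [],
  Object? resume,
}) async {
  final response = await http.post(
    Uri.parse('https://example.com/greenThumb'), // hypothetical endpoint
    headers: {'Content-Type': 'application/json'},
    body: jsonEncode({
      'data': {
        'prompt': prompt,
        'messages': messages,
        if (resume != null) 'resume': resume,
      },
    }),
  );
  final result = jsonDecode(response.body)['result'] as Map<String, dynamic>;
  // Start with an empty message list; if the last message ends in an
  // interrupt, show your choice UI and call the flow again with `resume`.
  return result['messages'] as List<dynamic>;
}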

Of course, there are a lot of picky details to get this right, so I’ll refer you to my flutter_fixit_warehouse sample, which provides all of the code and a more thorough explanation.

Where are we?

With the latest updates to the Flutter AI Toolkit, you can now build tool calls into your Flutter agentic apps. Furthermore, with Genkit’s support for interrupts, you can keep the human in the loop by handling server-side tool calls with input from your Flutter app users.

Now, with these new tools in your toolbox, go forth and build!

May 6, 2025 flutter ai

Pedantic AI in Dart: dartantic_ai

The Python community has a library called pydantic that adds type checking at run-time to a dynamically typed language. The library allows them to be “pedantic” about type validation in Python, aka pydantic; get it? : )

We don’t need that for Dart. We have static type checking and it’s wonderful.

Pedantic AI in Python: pydantic-ai

On top of pydantic, the Python community has built pydantic-ai, which makes it easy for you to specify typed output from your LLM requests and to describe typed access to your tools. For example:

# Python example with support for multiple models
import os

from pydantic import BaseModel
from pydantic_ai import Agent

class TownAndCountry(BaseModel):
    town: str
    country: str

model = 'openai:gpt-4o' # or 'google-gla:gemini-2.0-flash' or ...
print(f'Using model: {model}')
agent = Agent(model, output_type=TownAndCountry)

if __name__ == '__main__':
    result = agent.run_sync('The windy city in the US of A.')
    print(result.output)  # Output: town='Chicago' country='United States'

Check out the definition of the TownAndCountry type and the use of it when creating an Agent object with the output_type parameter. That’s all you need to get an instance of TownAndCountry populated by the LLM based on the prompt.

Now that’s something we don’t have in Dart! Instead, we have to do something like this:

// Dart example for Gemini only
void main() async {
  final model = gemini.GenerativeModel(
    apiKey: Platform.environment['GEMINI_API_KEY']!,
    model: 'gemini-2.0-flash',
    generationConfig: gemini.GenerationConfig(
      responseMimeType: 'application/json',
      responseSchema: gemini.Schema.object(
        properties: {
          'town': gemini.Schema.string(),
          'country': gemini.Schema.string(),
        },
        requiredProperties: ['town', 'country'],
      ),
    ),
  );

  final result = await model.generateContent([
    gemini.Content.text('The windy city of the US of A.'),
  ]);

  final json = jsonDecode(result.text!);
  final obj = TownAndCountry.fromJson(json);
  print(obj); // Output: TownAndCountry(town: Chicago, country: United States)
}

Plus, while the above code works for the Gemini SDK for Dart, if I want to do the same thing using the OpenAI SDK for Dart, I have to write very different code:

// Dart example for OpenAI only
void main() async {
  final client = openai.OpenAIClient(
    apiKey: Platform.environment['OPENAI_API_KEY'],
  );

  final response = await client.createChatCompletion(
    request: const openai.CreateChatCompletionRequest(
      model: openai.ChatCompletionModel.modelId('gpt-4o'),
      responseFormat: openai.ResponseFormat.jsonObject(),
      messages: [
        openai.ChatCompletionMessage.system(
          content:
              'Respond ONLY with JSON containing keys "town" and "country".',
        ),
        openai.ChatCompletionMessage.user(
          content: openai.ChatCompletionUserMessageContent.string(
            'The windy city of the US of A.',
          ),
        ),
      ],
    ),
  );

  final data =
      jsonDecode(response.choices.first.message.content!)
          as Map<String, dynamic>;

  final result = TownAndCountry.fromJson(data);
  print(result); // Output: TownAndCountry(town: Chicago, country: United States)
}

There must be a better way!

A Better Way: dartantic_ai

I was inspired by pydantic-ai for two main features:

  1. An easy way to go between models using just a string descriptor, e.g. openai:gpt-4o
  2. A common way to provide type information for output and tool calls, i.e. JSON schema

Those are the features I focused on initially for dartantic_ai, allowing you to write code like the following:

// Dart example with support for multiple models
class TownAndCountry {
  TownAndCountry({required this.town, required this.country});
  final String town;
  final String country;  
  
  factory TownAndCountry.fromJson(Map<String, dynamic> json) => TownAndCountry(
      town: json['town'],
      country: json['country'],
    );
  
  static Map<String, dynamic> get schemaMap => {
    'type': 'object',
    'properties': {
      'town': {'type': 'string'},
      'country': {'type': 'string'},
    },
    'required': ['town', 'country'],
    'additionalProperties': false,
  };
  
  @override
  String toString() => 'TownAndCountry(town: $town, country: $country)';
}

void main() async {
  final agent = Agent(
    model: 'openai:gpt-4o', // or 'google:gemini-2.0-flash' or ...
    outputType: TownAndCountry.schemaMap,
  );

  final result = await agent.run('The windy city in the US of A.');
  final obj = TownAndCountry.fromJson(jsonDecode(result.output));
  print(obj); // Output: TownAndCountry(town: Chicago, country: United States)
}

Here we’ve created a class to hold the typed output from the agent, passing in hand-written JSON schema and JSON decoder functions. Already, this is much simpler code than either the Gemini or the OpenAI sample, and it works with either family of models simply by changing the model description string.

Further, with a little bit of Dart builder magic, you can use json_serializable and soti_schema to generate the JSON serialization and JSON schema for you:

// Automatic JSON decoding and schema generation
@SotiSchema()
@JsonSerializable()
class TownAndCountry {
  TownAndCountry({required this.town, required this.country});

  factory TownAndCountry.fromJson(Map<String, dynamic> json) =>
      _$TownAndCountryFromJson(json);

  final String town;
  final String country;

  Map<String, dynamic> toJson() => _$TownAndCountryToJson(this);

  @jsonSchema
  static Map<String, dynamic> get schemaMap => _$TownAndCountrySchemaMap;

  @override
  String toString() => 'TownAndCountry(town: $town, country: $country)';
}

void main() async {
  final agent = Agent(
    model: 'openai:gpt-4o',
    outputType: TownAndCountry.schemaMap,
    outputFromJson: TownAndCountry.fromJson,
  );

  final result = await agent.runFor<TownAndCountry>(
    'The windy city in the US of A.',
  );

  print(result.output); // Output: TownAndCountry(town: Chicago, country: United States)
}

Using the builder, we no longer have to write the JSON serialization code or the JSON schema by hand — json_serializable and soti_schema handle that. And, for fun, we’re calling the runFor<T> method so that the output you get is typed w/o you having to manually call jsonDecode. Magic!

Potential Future

Right now, we’re in “phase 1” of dartantic_ai development — building out the core set of features and providers that work with those features (starting with Gemini and OpenAI). That’s what the code samples above are all about — what’s the best developer experience we can provide for a working Dart developer adding generative AI to their apps?

Once there’s a solid foundation, we can start experimenting with a builder that would allow you to write even simpler code:

@Agent()
class TacAgent extends Agent {
  TacAgent(super.model);

  @Run()
  Future<TownAndCountry> run(String prompt) => _$TownAndCountryAgentRun(prompt);
}

void main() async {
  final result = await TacAgent('openai:gpt-4o').run('The windy city of the US of A.');
  print(result.output); // Output: TownAndCountry(town: Chicago, country: United States)
}

And this is just the beginning. Today, dartantic supports tool calls, which you define with JSON schema in a way that’s similar to typed output from a run call. Now imagine being able to put a @Tool attribute on a method in your agent class and have the tool passed in automatically for you. There are all kinds of possibilities as soon as builders are involved.
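
Purely speculative, but here’s the kind of thing I have in mind; none of this builder support exists yet and the attribute names are just illustrations:

// Speculative sketch -- the builder would generate the JSON schema for the
// method's parameters, register it as a tool and decode the arguments for you.
@Agent()
class TacAgent extends Agent {
  TacAgent(super.model);

  @Run()
  Future<TownAndCountry> run(String prompt) => _$TownAndCountryAgentRun(prompt);

  @Tool(description: 'Get the current weather for a city')
  Future<String> getWeather(String city) async => '63°F and sunny in $city';
}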

Call for Contributors

As of the writing of this post, I’ve just started my dartantic_ai journey with a list of current and pending features you can read about on pub.dev. I only support the smallest amount of the Gemini and OpenAI SDK surface area to implement the initial features that are most important to me.

However, pydantic-ai has a big surface area with lots of great stuff for using LLMs in a type-safe, multi-model way that the Dart community would be interested in, including multi-agent support, agent graphs, multi-media support, streaming, etc. I’m going to need help to cover all of that, let alone making it work in a robust, matrix-tested way that can appeal to a growing community of Dart developers dipping their toes into AI.

Is dartantic_ai a style of interacting with LLMs from Dart that appeals to you? Are there features missing that you want or bugs you’ve found putting it to use? Then have I got a deal for you! Please contribute issues and PRs and let’s get this show on the road!

April 24, 2025 flutter ai

AI Agent with Dart + Gemini

To say that there has been a lot of activity in the AI space for developers lately would be an understatement. As we transition from “Ask” mode in our AI-based dev tooling to “Agent” mode, it’s easy to see agents as something magical.

“Any sufficiently advanced technology is indistinguishable from magic.” –A. C. Clarke

And while the vendors of AI-agent-based tooling might like you to think of their products as PFM, as Thorsten Ball points out in his blog post, How to Build an Agent or: The Emperor Has No Clothes, AI agents are not as magical as they appear. He then demonstrates that fact by implementing an AI agent using Go and Claude right before your eyes. I highly recommend reading it — Thorsten tells a gripping tale of AI and code. By the end, he’s pulled back the curtain on AI agents and made it quite clear that this technology is within anyone’s reach.

AI Agent in Dart

Combine Thor’s post with the recent Building Agentic Apps campaign announced by the Flutter team and I just couldn’t stop myself from doing a bit of vibe coding to produce the Dart and Gemini version:

import 'dart:io';

import 'package:google_generative_ai/google_generative_ai.dart';

Future<void> main() async {
  final apiKey = Platform.environment['GEMINI_API_KEY'];
  if (apiKey == null) {
    stderr.writeln('Please set the GEMINI_API_KEY environment variable.');
    exit(1);
  }

  final model = GenerativeModel(
    // model: 'gemini-2.0-flash',
    // model: 'gemini-2.5-flash-preview-04-17',
    model: 'gemini-2.5-pro-preview-03-25',
    apiKey: apiKey,
    tools: [
      Tool(
        functionDeclarations: [
          FunctionDeclaration(
            'read_file',
            'Read the contents of a file at a relative path.',
            Schema(
              SchemaType.object,
              properties: {'path': Schema(SchemaType.string)},
            ),
          ),
          FunctionDeclaration(
            'list_files',
            'List all files in a given directory.',
            Schema(
              SchemaType.object,
              properties: {'dir': Schema(SchemaType.string)},
            ),
          ),
          FunctionDeclaration(
            'edit_file',
            'Overwrite the contents of a file with new content.',
            Schema(
              SchemaType.object,
              properties: {
                'path': Schema(SchemaType.string),
                'replace': Schema(SchemaType.string),
              },
            ),
          ),
        ],
      ),
    ],
  );

  final chat = model.startChat();

  print('Gemini 2.0 Flash Agent is running. Type "exit" to quit.');
  while (true) {
    stdout.write('\x1B[94mYou\x1B[0m: ');
    final input = stdin.readLineSync();
    if (input == null || input.toLowerCase() == 'exit') break;

    final response = await chat.sendMessage(Content.text(input));

    final text = response.text?.trim();
    if (text != null && text.isNotEmpty) {
      print('\x1B[93mGemini\x1B[0m: $text');
    }

    final functionResponses = <Content>[];
    for (final candidate in response.candidates) {
      for (final part in candidate.content.parts) {
        if (part is FunctionCall) {
          final result = await handleToolCall(part);
          print('\x1B[92mTool\x1B[0m: ${part.name}(${part.args})');
          functionResponses.add(
            Content.functionResponse(part.name, {'result': result}),
          );
        }
      }
    }

    if (functionResponses.isNotEmpty) {
      final response = await chat.sendMessage(
        Content(
          '',
          functionResponses.map((c) => c.parts).expand((p) => p).toList(),
        ),
      );
      if (response.text != null) {
        print('\x1B[93mGemini\x1B[0m: ${response.text}');
      }
    }
  }
}

Future<String> handleToolCall(FunctionCall call) async {
  final args = call.args;
  try {
    switch (call.name) {
      case 'read_file':
        return await readFile(args['path'] as String);
      case 'list_files':
        return await listFiles(args['dir'] as String? ?? '.');
      case 'edit_file':
        return await editFile(
          args['path'] as String,
          args['replace'] as String,
        );
      default:
        final err = 'Unknown tool: ${call.name}';
        print(err);
        return err;
    }
  } catch (e) {
    final err = 'Error executing ${call.name}: $e';
    print(err);
    return err;
  }
}

Future<String> readFile(String path) async {
  final file = File(path);
  if (!await file.exists()) return 'File not found: $path';
  return await file.readAsString();
}

Future<String> listFiles(String dirPath) async {
  final dir = Directory(dirPath);
  if (!await dir.exists()) return 'Directory not found: $dirPath';
  final entries = await dir.list().toList();
  return entries.map((e) => e.path).join('\n');
}

Future<String> editFile(String path, String content) async {
  final file = File(path);
  await file.writeAsString(content);
  return 'File $path updated successfully.';
}

The full project is available on GitHub, but everything of interest is right here, with code that does the following:

  1. Connects to the Gemini API using an API key.
  2. Defines tools for file operations (read_file, list_files, edit_file).
  3. Enters a loop to chat with the user via the console.
  4. Sends user input to the Gemini model.
  5. Handles function calls requested by the model by executing corresponding local Dart functions for file system interaction.
  6. Sends the results back to the model and prints the final response.

To demonstrate functional parity with Thorsten’s Go implementation, the following are some examples.

Example: horsing around

Here’s an example of context carrying on from one prompt to the next:

$ dart bin/main.dart

Gemini 2.0 Flash Agent is running. Type "exit" to quit.
You: Hey! My name is Chris! How are you?
Gemini: Hi Chris! I'm doing well, thank you for asking!

How can I help you today?
You: I recently purchased a horse and I'm looking for some playful horse-related names based on my own name. Can you give me a hand with that?
Gemini: That's a fun idea, Chris! Let's brainstorm some playful horse names related to your name:

1.  **Chris-Cross Gallop:** Playful, suggests movement and maybe jumping.
2.  **Krispy Kremeigh:** A silly pun combining a treat with a horse sound.
3.  **Sir Chris-top-hoof:** A more elaborate, knightly-sounding pun, especially if your full name is Christopher.
4.  **Chriscendo:** Like a musical crescendo, suggesting growing speed or presence.
5.  **Chrispy Trot:** Simple, catchy, and combines "Chris" with a horse gait.
6.  **Christo-fur:** If your horse is particularly furry or fluffy!
7.  **Captain Chris:** Simple, classic, gives a sense of leadership.
8.  **Whinny Chris:** Combines a horse sound with your name.
9.  **Topher Trot:** If your full name is Christopher, using the shorter form.
10. **Chrispy Comet:** Suggests speed and energy.

Hopefully, one of these sparks some inspiration! Do any of these sound like a good fit for your new horse's personality? Or would you like some more ideas?
You: exit

Notice that Gemini remembers my name.

Example: read_file tool

Here’s an example of the tool that reads files:

You: Gemini, buddy, help me solve a riddle in the secret-file.txt file.
Gemini: Okay, buddy! I can help with that. First, let's see what the riddle is.
Tool: read_file({path: secret-file.txt})
Gemini: Okay, the riddle in `secret-file.txt` is:

"what animal is the most disagreeable because it always says neigh?"

The answer is a **horse**!

Do you need help with anything else, buddy?
You: what's going on in bin/main.dart? be brief!
Tool: read_file({path: bin/main.dart})
Gemini: It's a Dart command-line application that:
1.  Connects to the Gemini API using an API key.
2.  Defines tools for file operations (`read_file`, `list_files`, `edit_file`).
3.  Enters a loop to chat with the user via the console.
4.  Sends user input to the Gemini model.
5.  Handles function calls requested by the model by executing corresponding local Dart functions for file system interaction.
6.  Sends the results back to the model and prints the final response.

It’s really the independent use of tools that makes an agent an agent — I tell it about the riddle in secret-file.txt but I don’t tell it to use a tool. It has the tool — because I declared it when configuring the model — and it uses it as it sees fit.

Example: list_files tool

Here the agent is using another tool that knows how to list files in a folder, which it uses multiple times and in conjunction with the tool to read files.

You: what do you see in this directory?
Tool: list_files({})
Gemini: Okay, I see the following files and directories in the current directory:

*   `.DS_Store` (macOS metadata file)
*   `bin` (directory, likely containing executable scripts)
*   `CHANGELOG.md` (Markdown file)
*   `pubspec.lock` (Dart package lock file)
*   `secret-file.txt` (Text file)
*   `README.md` (Markdown file)
*   `pubspec.yaml` (Dart package configuration file)
*   `.gitignore` (Git configuration file)
*   `analysis_options.yaml` (Dart analysis configuration file)
*   `.dart_tool` (Dart build/cache directory)
*   `.git` (Git repository directory)
*   `.vscode` (VS Code editor configuration directory)

You: tell me about all of the Dart and Dart-project-related files in this directory and in the bin directory
Tool: list_files({})
Tool: list_files({dir: bin})
Gemini: Okay, I can list the files in the current directory and the `bin` directory and then describe the Dart-related ones.

Based on the file listing:

**In the `bin` directory:**

*   `bin/main.dart`: This is a Dart source code file, likely the main entry point for an executable application.

**In the current directory (`./`):**

*   `pubspec.yaml`: This is the core Dart project definition file. It defines the project's name, description, dependencies, and other metadata.
*   `pubspec.lock`: This file locks the specific versions of the dependencies listed in `pubspec.yaml` to ensure reproducible builds. It's automatically generated or updated when you run `dart pub get`.
*   `analysis_options.yaml`: This file configures the Dart static analyzer and linter rules for the project.
*   `.dart_tool/`: This directory is created and managed by the Dart/Flutter tools. It contains build artifacts, package configuration files (`package_config.json`), and other temporary files needed for the project. You usually don't edit files inside it directly.
*   `.gitignore`: While not strictly a Dart file, it often contains entries specific to Dart projects to prevent generated files (like `.dart_tool/`, `build/`, `.packages`, `pubspec.lock` - although `pubspec.lock` is often committed) from being checked into version control.
*   `bin/`: This directory itself, which we already examined, typically contains the executable Dart scripts for a project.

You: what dart sdk version are we using for this project?
Gemini: Okay, the Dart SDK version constraint for the project is defined in the `pubspec.yaml` file. I'll read that file to find the constraint.
Tool: read_file({path: pubspec.yaml})
Gemini: Based on the `pubspec.yaml` file, the Dart SDK version constraint for this project is:

environment:
  sdk: ^3.7.2

This means the project requires a Dart SDK version that is compatible with `3.7.2`. Specifically, it allows any version from `3.7.2` up to (but not including) `4.0.0`.

Again, the prompt isn’t explicit about what tool(s) to use — it decides itself based on what task it’s trying to accomplish.

Example: edit_file tool

One more tool — the ability to edit files:

$ dart bin/main.dart

Gemini 2.0 Flash Agent is running. Type "exit" to quit.
You: hey, gemini, create fizzbuzz.js that I can run with nodejs and that has fizzbuzz in it and executes it
Tool: edit_file({path: fizzbuzz.js, replace: function fizzBuzz(n) {
  for (let i = 1; i <= n; i++) {
    if (i % 3 === 0 && i % 5 === 0) {
      console.log("FizzBuzz");
    } else if (i % 3 === 0) {
      console.log("Fizz");
    } else if (i % 5 === 0) {
      console.log("Buzz");
    } else {
      console.log(i);
    }
  }
}

// Execute the function up to 100
fizzBuzz(100);
})
Gemini: OK. I've created the `fizzbuzz.js` file with the FizzBuzz logic. It will print FizzBuzz up to 100 when you run it using Node.js (e.g., `node fizzbuzz.js`).
You: exit

$ node fizzbuzz.js
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
Buzz

$ dart bin/main.dart

Gemini 2.0 Flash Agent is running. Type "exit" to quit.
You: please edit fizzbuzz.js so that it only prints until 15
Gemini: Okay, I can help with that. First, I need to read the current content of `fizzbuzz.js` to see how it's written.
Tool: read_file({path: fizzbuzz.js})
Gemini: Okay, I see the content of `fizzbuzz.js`. It defines a function `fizzBuzz(n)` and then calls it with `fizzBuzz(100);`.

To make it print only until 15, I will change the last line from `fizzBuzz(100);` to `fizzBuzz(15);`.

Here's the code to edit the file:

You: go ahead
Tool: edit_file({path: fizzbuzz.js, replace: function fizzBuzz(n) {
  for (let i = 1; i <= n; i++) {
    if (i % 3 === 0 && i % 5 === 0) {
      console.log("FizzBuzz");
    } else if (i % 3 === 0) {
      console.log("Fizz");
    } else if (i % 5 === 0) {
      console.log("Buzz");
    } else {
      console.log(i);
    }
  }
}

// Execute the function up to 15
fizzBuzz(15);
})
Gemini: OK. I have edited the `fizzbuzz.js` file to change the function call from `fizzBuzz(100)` to `fizzBuzz(15)`. The script will now only print the FizzBuzz sequence up to 15 when executed.
You: exit

$ node fizzbuzz.js
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz

Not only can the agent list, read and write files all on its own, it can update existing files. And it can do so while producing code that actually works (which is more than many interview candidates can do with fizzbuzz in my experience).

Congrats!

And finally:

$ dart bin/main.dart

Gemini 2.0 Flash Agent is running. Type "exit" to quit.
You: Create a congrats.js script that rot13-decodes the following string 'Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!' and prints it
Tool: edit_file({replace: function rot13(str) {
  return str.replace(/[a-zA-Z]/g, function(char) {
    const charCode = char.charCodeAt(0);
    // Check case: uppercase A-Z (65-90), lowercase a-z (97-122)
    const base = charCode < 97 ? 65 : 97;
    // Apply ROT13 shift
    return String.fromCharCode(base + (charCode - base + 13) % 26);
  });
}

const encodedString = 'Pbatenghyngvbaf ba ohvyqvat n pbqr-rqvgvat ntrag!';
const decodedString = rot13(encodedString);
console.log(decodedString);
, path: congrats.js})
You: exit

$ node congrats.js
Congratulations on building a code-editing agent!

At this point, it should be clear that building an AI agent in Dart using Gemini is certainly not magic. In fact, it’s not even hard — the whole thing is less than 140 LOC. This sample is obviously not something I’d recommend shipping, since there are no guardrails keeping the AI from seeing and changing every file in your file system. It does make a good place to start when building agentic apps for Dart and Flutter, however. Enjoy!

April 9, 2025 flutter ai

Building Generative AI for DartPad

Hello, again, and welcome to another installment of “Flutter + AI = Joy.” In today’s episode, we’re taking a behind-the-scenes look at the design and implementation of the generative AI features in the latest version of DartPad. Before we get started, if you haven’t already read Amanda’s most excellent blog post for an overview of the new functionality, I recommend starting there.

Streaming Responses

The first thing I needed to do to enable generative AI in DartPad was to add support for streaming responses. Even as fast as Gemini 2.0 Flash is, if you have to wait for the complete code for anything beyond a Dart “hello, world”, you’re gonna get antsy.

The backend service for DartPad is built on top of shelf, the package that provided server-side support for Dart before it was cool. Shelf supports streaming, but the docs on the topic are sparse. Also, streaming hadn’t been used in DartPad before, so it was a bit of an experiment. Ultimately I turned gzip and I/O buffering off for the code-gen API endpoints and streamed the generated code back as UTF-8-encoded bytes, which the client is expected to decode.
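
Here’s a hedged sketch (not the actual DartPad server code) of what a streaming shelf handler along those lines can look like; the header set is illustrative, since exactly which headers you need to defeat buffering depends on your hosting setup.

// Minimal sketch: stream UTF-8 chunks back to the client as the model
// produces them, and hint to proxies not to buffer or compress the response.
import 'dart:convert';

import 'package:shelf/shelf.dart';

Response streamGeneratedCode(Stream<String> modelChunks) => Response.ok(
      modelChunks.map(utf8.encode), // Stream<List<int>> body
      headers: {
        'Content-Type': 'text/plain; charset=utf-8',
        'Cache-Control': 'no-cache',
        'X-Accel-Buffering': 'no', // illustrative anti-buffering hint
      },
    );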

And this worked great — the server sent data back in chunks of bytes and the client decoded them back into a string, updating the UI for each chunk that it received. Except that the client only got one chunk with the complete response for every request. And this was true even though the server was sending back multiple chunks as Gemini provided them. So what was the problem?

It took a ridiculously long time (days!) to figure out that the Dart http package on the web was using XMLHttpRequest, which collapsed streaming responses into a single response, killing any chance to provide progress updates. With some hacking around, I figured out that the fetch API did the right thing, so the http package needed an update. I discovered this in February of 2025. The good news is that the Dart team had already done that work in November of 2024 and that the PR was pending! Once that PR landed, we were good to go.
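
For completeness, here’s a sketch of the client side of that loop with package:http; on the web you need a fetch-backed client (per the fix above) so the chunks aren’t coalesced into one response.

// Sketch: send the prompt, then decode each UTF-8 chunk as it arrives and
// hand it to the UI.
import 'dart:convert';

import 'package:http/http.dart' as http;

Future<void> generate(
  Uri endpoint,
  String prompt,
  void Function(String chunk) onChunk,
) async {
  final client = http.Client();
  try {
    final request = http.Request('POST', endpoint)..body = prompt;
    final response = await client.send(request);
    await response.stream.transform(utf8.decoder).forEach(onChunk);
  } finally {
    client.close();
  }
}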

Error Handling

My initial design proposal called for adding a Gemini menu to DartPad with New, Update, Fix and Image to Code functionality:

Image to Code was bundled together with Dart/Flutter Snippet (New) and Update code (Update) via the ability to attach images. Bringing up a dialog to enter a prompt made sense for New and Update, since DartPad doesn’t know what kind of code you want to generate or what updates you want to make.

For Fix, however, it was annoying to have to tell DartPad what the error was, since the analyzer was reporting the errors to me! So I hijacked the analyzer error message UI with the idea of building the prompt to suggest a fix for the user. The result is that now there’s a lightbulb to indicate analyzer messages and to provide an easy way for the user to trigger the menu of potential fixes. Right next to that, I added a Gemini sparkle icon:

Clicking on the sparkle bundles up the error message automatically, asks Gemini for a fix and provides you a diff:

That’s just magic! Once I had it working for analyzer errors, I needed it for run-time errors, too, so I added the Gemini sparkle to the console output window.

When you press on the sparkle icon in this case, DartPad will bundle up your run-time error and suggest a fix.

Unfortunately, there was some work needed to enable the magic for run-time errors. Previously, there had been no reason to distinguish between normal console output and error output. That meant there was no good way to decide when to show the blue sparkle. However, you certainly do not want to show the Suggest Fix button when a Dart app is printing the last 10 digits of pi. Luckily, John Ryan, Flutter DevRel and engineering lead for DartPad, came to the rescue with a fix that allowed me to reliably show the blue sparkle only when it was needed.

UX Shortcuts

After a long time on Unix before Windows and a long time on Windows before Mac, I’ve become a keyboard guy. I want to know all of the keyboard shortcuts so I can avoid using the mouse. While building and testing DartPad, I spent a lot of time in the prompt and code generating dialogs, both of which require you to press the Accept button. So I was doing that a lot. This annoyed me, so I added a keyboard shortcut:

  • Ctrl+Enter (Cmd+Enter on macOS) will trigger the Accept action

And because I’m super lazy:

  • Accepting the generated code will trigger the Run action

  • Or if hot reload is enabled, the Reload action will be triggered instead

I added all of this simply because I couldn’t figure out a case when that isn’t what you wanted to happen. This means that you can enter your prompt, press Ctrl/Cmd+Enter once to generate the code, then again to accept it and it will automatically be run/reloaded for you. No muss, no fuss. No [mouses](https://en.wikipedia.org/wiki/Computer_mouse) were harmed in the creation of this feature.

“Necessity is not the mother of invention; laziness is.” –J. Michael Sells (my dad)
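
If you’re curious how that kind of shortcut gets wired up in Flutter, here’s a hedged sketch (not DartPad’s actual code) using Shortcuts and Actions; the widget and callback names are made up for illustration.

// Sketch: bind Ctrl+Enter (Cmd+Enter on macOS) to an "accept" intent.
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

class AcceptIntent extends Intent {
  const AcceptIntent();
}

class PromptDialogBody extends StatelessWidget {
  const PromptDialogBody({super.key, required this.onAccept, required this.child});

  final VoidCallback onAccept; // accept the generated code, then run/reload
  final Widget child;

  @override
  Widget build(BuildContext context) => Shortcuts(
        shortcuts: const {
          SingleActivator(LogicalKeyboardKey.enter, control: true): AcceptIntent(),
          SingleActivator(LogicalKeyboardKey.enter, meta: true): AcceptIntent(),
        },
        child: Actions(
          actions: {
            AcceptIntent: CallbackAction<AcceptIntent>(
              onInvoke: (_) {
                onAccept();
                return null;
              },
            ),
          },
          child: Focus(autofocus: true, child: child),
        ),
      );
}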

Future Hopes & Dreams

The initial goal for adding generative AI features to DartPad was to do so with a simple one-and-done style prompt in a modal dialog instead of the multiple prompts of a chat-style UI. Plus, by adding the new functionality without interfering with any of the existing DartPad UI, we could test it first to see if anyone cared.

It’s already apparent that you care. And that you really don’t like the modal dialogs. Instead, you want the prompt and iterate style of a chat interface (aka vibe coding). Toward that end, personally I’d like to see DartPad move towards something like this in the future:

What do you think? How would you like DartPad to work wrt generative AI? Please drop your thoughts below!

April 2, 2025 flutter ai

Building the Flutter AI Toolkit

Building the Flutter AI Toolkit

It has been quite an exciting few years since I left Google in 2022. I had been on the Flutter team for 3 years by that point, helping it go from v1.0 to v3.0, from hundreds of thousands of users to millions. It was exciting, but time for a change. And boy did I get some! Over the next 3 years I did the following:

  • Worked on the developer experience for VR/AR/XR at Meta in their Reality Labs division.
  • Helped to ship the v1.0 for both the consumer and the enterprise versions of Cody from Sourcegraph, a long-time start-up with amazing code search capabilities. Seriously, if you haven’t tried Sourcegraph Code Search, you should. And if your company is using Code Search, their AI coding assistant Cody integrates with the same code. Give em both a try!
  • Retired from full-time employment after a 35-year software engineering career (and more than that if you count when I started writing code on my Apple ][+ at the age of 14).
  • Put up my shingle as an independent consultant specializing in applied AI.
  • Built the Flutter AI Toolkit.
  • Started giving Flutter talks again with Dart and Flutter and AI, Oh My! at FlutterCon in 2024 (click the link to watch the talk).
  • Built the generative AI features in the latest version of DartPad.
  • Have built some other AI stuff too that hasn’t shipped yet. Stay tuned!

Even before I left the Flutter team, I was spending my spare time digging into AI. When I was at Meta, I was focused on xR, but again, spending my spare time on AI. And then ChatGPT flipped the table, so when the opportunity to work on AI in the developer tooling space came up, I couldn’t say no.

In June of last year, I decided that I wanted to control my own time again, so I retired. That’s not to say that I wanted to stop the work — I love what I do — it’s just that my pesky job was taking up all my time. As the VP of Product, I had all kinds of influence on what other folks did but no time to get my own hands dirty! Once I retired, of course, I wanted to spend time with Flutter; it’s got the best API, the best team and the best community in tech, and I’ve seen more than my share. But now I had a new focus: applied generative AI.

The first thing I did in this new area was work with the Flutter and Firebase teams on the creation and application of the Flutter AI Toolkit.

Hello, Flutter AI Toolkit

The goal of the Flutter AI Toolkit (or just AIT from now on) is to provide an LLM chat widget that you can easily plug into your existing Flutter apps. The Flutter team shipped the initial version in December 2024, which you can read all about in Amanda’s most excellent blog post on the topic. Since then, I’ve been taking user feedback to add new features and fix issues. The current version as I write this is 0.6.8, but I’m going to be shipping a 0.7.0 soon.

To get started, I’ll refer you to the flutter_ai_toolkit package README. Once you’ve done that, a minimal chat app in Flutter using Gemini 2.0 is as easy as this:

import 'package:flutter/material.dart';
import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';
import 'package:google_generative_ai/google_generative_ai.dart';

void main() => runApp(_MainApp());

class _MainApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) => MaterialApp(
    home: Scaffold(
      body: LlmChatView(
        provider: GeminiProvider(
          model: GenerativeModel(
            model: 'gemini-2.0-flash',
            apiKey: 'GEMINI-API-KEY',
          ),
        ),
      ),
    ),
  );
}

There’s not much to this code, but it brings up some interesting points anyway:

  • The name of the chat widget is LlmChatView
  • It supports many options, but only one is required — a provider that connects your LLM to the features needed by the chat widget
  • There are two LLM providers that come out of the box: GeminiProvider and VertexProvider
  • Both providers take a GenerativeModel instance, provided by the Google AI Dart SDK and the Vertex AI in Firebase Flutter packages respectively, enabling you to configure them fully as you choose.
  • The separation of the chat widget from the communication channel with the LLM allows you to plug in any LLM-like service you’d like.

If you run this code, you’ll get a very simple, but fully functional, cross-platform AI chat app. A slightly fancier version is hosted online that you can try right now. It looks like this by default:

Notice the suggestions and the welcome message, both of which you can specify as optional parameters to LlmChatView. Also notice the + button, which allows the user to upload an image or a file, and the mic button, which allows the user to provide voice input.

Custom Styling

Beyond the basic look n’ feel, the demo example also provides some styling options:

Out of the box, the Flutter AI Toolkit supports being hosted in MaterialApp or CupertinoApp. However, as of the pre-release status at the time of this writing, it does not automatically adapt to the current theme provided by either. Instead, the look n’ feel was designed by a Google UX designer to have its own modern style. And if you’d like to tailor it to your app, you have complete control (as you can see above). Check out the dark_mode and custom_styles examples for the details.

When I went to implement styling support, I took my inspiration from the flutter_markdown package. Theming is great when you have a lot of a given widget in your app, e.g. you don’t want to set the styles for each Text widget — you want to use a theme for that. However, I figured that you’d have about one LlmChatView widget per app, so theming didn’t make as much sense as just passing in a style object, like you can do with the Markdown widget.

That said, the ability to pick up on the current Material or Cupertino theme and use it as the default styling for an LlmChatView is an obvious missing feature. If this is something you’re passionate about, PRs are gladly considered!

Custom Response Widgets: Beyond Text

By default, the AIT shows LLM responses as Markdown:

But what if you want the AI to respond with something richer — like a card UI, a chart or a recipe? As an example, consider a recipes app that allows the user to maintain a database and to add to it via a conversation with an LLM. To be able to get the data out of the LLM to add to the database, you don’t want to scrape it out of Markdown — you want well-formed data that you can parse reliably. You can get that by configuring your model to output JSON. By default, that JSON would look like this when shown to the user:

Of course, this isn’t particularly useful for the user, so the AIT allows you to show your own Flutter widget, as the recipe example does:

In addition to showing the recipe in a way the user can read, the example also provides an Add Recipe button to allow the user to easily drop the recipe into their database, all from the comfort of their AI chat.
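Getting well-formed JSON out of Gemini is mostly a matter of how you configure the GenerativeModel that you hand to the provider. Here's a minimal sketch, assuming the current google_generative_ai API surface; the system instruction and field names are made up for illustration, and the recipe example in the repo has the real details:

import 'package:flutter_ai_toolkit/flutter_ai_toolkit.dart';
import 'package:google_generative_ai/google_generative_ai.dart';

// Sketch: ask Gemini for JSON instead of Markdown so responses can be parsed
// reliably; the prompt and field names below are illustrative only.
final recipeProvider = GeminiProvider(
  model: GenerativeModel(
    model: 'gemini-2.0-flash',
    apiKey: 'GEMINI-API-KEY',
    generationConfig: GenerationConfig(
      responseMimeType: 'application/json', // well-formed JSON, not Markdown
    ),
    systemInstruction: Content.system(
      'Respond with a single JSON object containing "title", "ingredients" '
      'and "instructions" fields.',
    ),
  ),
);

From there, the chat view can hand that JSON to whatever Flutter widget you like, which is exactly what the recipe example’s Add Recipe flow does.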

Pluggable LLMs

As big of a fan as I am of the Gemini models (2.5 pro experimental is blowing the doors off as I write this), different apps have different needs. Since I had to make LLMs pluggable to support Gemini and Vertex (both of which have their own API over the Gemini LLMs), I thought it would be useful to support the ability to talk to any LLM-like backend. And the community has responded with the flutter_ai_providers package. Together with what comes out of the box and what is provided by the community, the AIT supports the following models:

  • Google Gemini 1.x and 2.x, flash and pro, stable and experimental
  • OpenAI GPT and o series of models
  • Anthropic Claude series of models
  • 100s of locally hosted Ollama models, including Google Gemma3, QWQ, Deepseek, Llama3.x, etc.

If that isn’t enough for you, you can implement the LlmProvider interface and plug in anything you like. For debugging purposes, I built EchoProvider but my favorite use of this feature is hooking up the LlmChatView to flows that I define in Firebase Genkit.

Check out the AI Toolkit docs for more details, but it’s a pretty simple interface. Even the Gemini and Vertex AI providers are only ~150 LOC each.

Can Haz More?

I’ve just touched on some of the big pieces of the AIT; for the full details, I recommend the AI Toolkit docs. In addition, there are extensive feature-specific samples on GitHub.

For something a bit more end-to-end, I’ve also built out a full chat app using the AIT:

Notice the list of chats on the left. Those are stored in Firestore and the app provides full management of the history and name of each chat, so the user can rename them, delete them and create new ones. There’s even a little magic that automatically names each chat based on the first question the user asks, just for fun.

In addition, there’s a more fully-featured version of the recipe sample using Firebase for the storage of recipes as well as the Vertex AI LLM.

If you’re interested in other Flutter and Firebase AI samples, I’ll point you at the Quickdraw and Planning samples:

Where Are We?

I was not alone in building the AIT. In addition to the contributors on the repo (thank you, contributors!), I want to give a special shout-out to Rory Blyth. He’s been my friend for decades now and been working on generative AI since the GPT-1 era. He did a bunch of advance work on the mic input and the recipe sample as well as being my build buddy through the whole thing. Thanks, Rory!

It is amazing to me how much — and how quickly! — AI is changing the development tooling landscape. Hopefully you’ll find that the Flutter AI Toolkit makes it easy for you to integrate LLM chats into your Flutter apps. Customize the chat interface, display rich responses beyond text, and leverage community-driven providers for a myriad of LLM options. Dive into the docs, explore the samples, and don’t hesitate to contribute or report issues. Or just drop me a line; I’m super interested to see what you’re building!

September 30, 2024 flutter

Dart and Flutter packages need your Love!

Dart and Flutter packages need your Love!

Do you have some free time? Are you excited to get involved in the Dart and Flutter ecosystem? Then have I got a deal for you!

Last week, the Founder and CEO of Shorebird, who is coincidentally also the Founder of the Flutter project at Google, had this to say about the current state of the package ecosystem that lies at the core of Dart and Flutter:

The reason that Eric was asking for a list of “important & abandoned” packages was so that the community could decide which packages needed some more love. Such a list is important, for example, if you’re the FlutterFlow corporation and you’ve recently announced at a keynote that you have $1M to contribute to the Flutter community (thanks Axel and Abel!).

For the rest of us, such a list is important if we’d like to get involved with the Dart and Flutter community and pitch in on some of the packages that need a little bit of love.

Towards that end, I’ve produced an initial list of the top 500 packages with a calculated “love number”, which is equal to the number of days since publication divided by the popularity. This should really be called the “needs love number,” since the higher the number, the more a package needs love.

I’ve got two ranges of numbers that sometimes cross into the red zone in this spreadsheet. The first is the number of days since publication, which is marked in red if the package hasn’t been published within the last 6 months. Similarly, if the love # is >225 (180 days/80% popularity), I mark it as red.
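To make the calculation concrete, here’s a minimal sketch of the love # math in Dart (the package name and dates are made up; the 180-day and 225 thresholds are the ones from the spreadsheet):

// Sketch of the "love number" calculation: days since last publication
// divided by popularity (expressed as a fraction, e.g. 0.8 for 80%).
class PackageLove {
  PackageLove(this.name, this.lastPublished, this.popularity);
  final String name;
  final DateTime lastPublished;
  final double popularity;

  int get daysSincePublished =>
      DateTime.now().difference(lastPublished).inDays;
  double get loveNumber => daysSincePublished / popularity;

  // red zones from the spreadsheet
  bool get stalePublish => daysSincePublished > 180;
  bool get needsLove => loveNumber > 225;
}

void main() {
  // hypothetical package published ~10 months ago with 80% popularity
  final pkg = PackageLove('example_pkg', DateTime(2023, 12, 1), 0.8);
  print('${pkg.name}: love # = ${pkg.loveNumber.toStringAsFixed(0)}, '
      'needs love: ${pkg.needsLove}');
}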

provider & permission_handler: have love

You can see several packages marked as red in this screenshot, the first being the provider package. The provider package has a good excuse for not being updated recently — the author has decided that he’d rather people use the riverpod package as a replacement. That’s fair and it says so in the notes (and you can suggest notes for packages you have info about in the publove repo). Even so, Remi is keeping provider relatively up to date, as evidenced by the fact that its love # is still in a good range. Likewise, permission_handler from Baseflow is also still in a good range. Thanks to both of you for the great work!

font_awesome_flutter: has love

The font_awesome_flutter package also has a red love #. So is it in trouble? Let’s look at the repo:

Those are pretty respectable numbers, so I’d say that the folks at fluttercommunity.dev are doing pretty well with it. Thanks!

google_sign_in: needs love!

On the other hand, the Flutter team itself isn’t able to always keep up with demand. For example, the google_sign_in package has a love # of 291:

It does seem like the google_sign_in package could use some love. Do you have a special affinity for social auth? If so, perhaps submit a PR? You can tell by the numbers that PRs get a higher priority than just plain ol’ issues.

group_list_view: needs love!

The package in the list of the top 500 with the highest love # (1369) is group_list_view. Does it need love?

It hasn’t been published in 3.5 years and doesn’t have a verified publisher, so perhaps Daniel could use some help with this one, especially with a 95% popularity score and 1299 likes.

flutter_barcode_scanner: really needs love!

Probably the hardest kind of package to keep well-maintained is a plugin, which has platform-specific code. For example, the flutter_barcode_scanner package has some outstanding issues:

Even though this package has 1342 likes, it hasn’t been published in 3 years and has no verified publisher. It could clearly use some love.

Where are we?

I pulled this list together because I want the number of Dart and Flutter packages that get regular love to increase and to squeeze the red out of this spreadsheet. If you’d like to give me a hand with finding the packages that need love, feel free to pitch in on the publove repo. The core data all comes from the excellent pub_api_client package that Leo maintains w/o any docs or support from Google (thanks, Leo!). Also, if you’re interested in pub.dev stats in general, I just stumbled onto the pubstats.dev site, which looks promising.

Scrolling through the publove list, there’s a lot of red. All of us working together is what makes Dart and Flutter great, so if you’ve got some free time and there’s something on this list that appeals to you, lend a hand!

April 11, 2020 flutter

Understanding Flutter: deep links on the web

Understanding Flutter: deep links on the web

NOTE: This content has been subsumed by the package I wrote to support declarative routing via the Navigation 2.0 API: go_router. This package is what I use now for my own projects and I hope that you’ll find it useful for your projects as well. Enjoy.

Lately I’ve been working on a little side project with a friend of mine and I’m the coder, so I get to pick the tech I want for the implementation. My choice is easy: Flutter! We’re targeting the web with our project for two reasons: 1) it makes distribution easy and 2) it enables us to do social media marketing with deep links to content that gets people to come to the site, sign up, … profit!

Of course, for that to work, deep linking has to work, which is a whole good news/bad news thing with Flutter web. The good news is: all the bits and pieces are there to enable deep linking. The bad news is, like a father in the wee hours of Christmas morning, there is some assembly required.

The sample code

This blog post builds up a very simple app step-by-step in a way meant to take you from the basics of navigation in Flutter (which I assume you already know) to the advanced considerations you’ll need to take into account for your Flutter web app to work well. All of the code from this blog post is available step-by-step as commit history in this github repo. The last commit represents best practices as far as I know them.

What is deep linking?

While mobile apps also support deep linking, for the purposes of this post, we’ll focus on the web-specific definition from Wikipedia:

deep linking is the use of a hyperlink that links to a specific, generally searchable or indexed, piece of web content on a website (e.g. http://example.com/path/page), rather than the website’s home page (e.g. http://example.com). *The URL contains all the information needed to point to a particular item.*

It’s that last bit (emphasis mine) that we care about — a shared URL that our Flutter app can use to get the user to the appropriate content.

Deep linking scenarios

There have been a few interesting articles lately on Flutter web as related to navigation and especially deep linking, like Deep Linking for Flutter Web by Rody Davis, which shows how to use fluro (“The brightest, hippest, coolest router for Flutter”) and Flutter web: Navigating URLs using named routes by Per Classon, a member of the Material team at Google (hi, Per!). However, while both articles are a great intro to the topic, neither of them covers everything I need to make sure that my app is a good web citizen.

If you’re not familiar with the basics of Flutter navigation and routes, I recommend the Navigation and routing page from the docs. However, I’m interested in more advanced scenarios that have specific implications on the web:

  • Navigating to sub-pages w/ parameters, e.g. /family?fid=452 or /family/452
  • Navigating to a sub-sub page w/ parameters and parent-child dependencies, e.g. /family/452/person/42
  • Error handling for navigation to a sub (or sub-sub) page w/ invalid parameters
  • Error handling for navigation to a page that’s not there

There’s a lot here, so let’s dig in.

Sample Data Model

To start, let’s say we want to navigate through a hierarchy of people grouped into families:

  • families
    • family
      • name=Sells
      • people
        • person: name=Chris, age=50
        • person: name=John, age=25
        • person: name=Tom, age=24
    • family
      • name=Addams
      • people
        • person: name=Gomez, age=55
        • person: name=Morticia, age=50
        • person: name=Pugsley, age=10
        • person: name=Wednesday, age=17

In Dart, this data can be represented like so:

class Person {
  final String name;
  final int age;
  Person({this.name, this.age});
}

class Family {
  final String name;
  final List<Person> people;
  Family({this.name, this.people});
}

class App extends StatelessWidget {
  static final title = 'Flutter Web Deep Linking Demo';
  static final families = [
    Family(
      name: 'Sells',
      people: [
        Person(name: 'Chris', age: 50),
        Person(name: 'John', age: 25),
        Person(name: 'Tom', age: 24),
      ],
    ),
    Family(
      name: 'Addams',
      people: [
        Person(name: 'Gomez', age: 55),
        Person(name: 'Morticia', age: 50),
        Person(name: 'Pugsley', age: 10),
        Person(name: 'Wednesday', age: 17),
      ],
    ),
  ];
  ...
}

This data can be arranged nicely to give us a sample to play with, e.g. a home page, a sub-page and a sub-sub-page.


By the time we’re done, we want to be able to copy a deep link to a person and for that link to navigate correctly.

Tip #1: pass settings to MaterialPageRoute

Intuitively, because we want to navigate to sub-pages for each family and for each person, we know we have to pass arguments to our named navigation routes, as in this code.

class HomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(App.title)),
        body: ListView(
          children: App.families
              .map((f) => ListTile(
                  title: Text(f.name),
                  // navigate to the /family page to show a specific family
                  onTap: () => Navigator.pushNamed(context, '/family',
                      arguments: {'family': f})))
              .toList(),
        ),
      );
}

class FamilyPage extends StatelessWidget {
  final Family family;
  FamilyPage(this.family);

  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(family.name)),
        body: ListView(
          children: family.people
              .map((p) => ListTile(
                  title: Text(p.name),
                  // navigate to the /person page to show a specific person
                  onTap: () => Navigator.pushNamed(context, '/person',
                      arguments: {'family': family, 'person': p})))
              .toList(),
        ),
      );
}

class PersonPage extends StatelessWidget {
  final Family family;
  final Person person;
  PersonPage(this.family, this.person);

  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(person.name)),
        body: Center(child: 
                Text('${person.name} ${family.name} is ${person.age} years old')),
      );
}

This code shows the basics of navigation that will work just fine for navigation inside the app. Click on a family and you’ll navigate to the /family route with that family as an argument and likewise for a person and the /person route. The arguments themselves are passed in a handy map.

Here’s the code to create the pages with the appropriate constructor args:

import 'package:flutter/material.dart';

void main() => runApp(App());

class App extends StatelessWidget {
  static final title = 'Flutter Web Deep Linking Demo';

  @override
  Widget build(BuildContext context) => MaterialApp(
        title: title,
        theme: ThemeData(primarySwatch: Colors.blue),
        home: HomePage(),
        onGenerateRoute: (settings) {
          final args = settings.arguments as Map<String, dynamic>;
          switch (settings.name) {
            case '/':
              return MaterialPageRoute(builder: (_) => HomePage());
                        // pull the family object out of the arguments object
            case '/family':
              final family = args['family'] as Family;
              return MaterialPageRoute(builder: (_) => FamilyPage(family));
            // pull the family and person objects out of the arguments object
            case '/person':
              final family = args['family'] as Family;
              final person = args['person'] as Person;
              return MaterialPageRoute(
                  builder: (_) => PersonPage(family, person));
            default:
              return MaterialPageRoute(
                  builder: (_) => Four04Page('unknown route: ${settings.name}'));
          }
        },
      );
}

Here we’re using settings.name to find out which route we’d like and constructing the appropriate page widget with data from the settings.arguments. That widget we then pack into a MaterialPageRoute and return, which Flutter uses to push the route onto the stack of routes and show it. There are other ways to do routing with Flutter besides implementing onGenerateRoute for MaterialApp, but only a couple ways that allow you to parse the arguments passed to Navigator.pushNamed. This is the method I like best.

In the case that there is no matching route, we show a 404 page:

class Four04Page extends StatelessWidget {
  final String message;
  Four04Page(this.message);

  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text('Page Not Found')),
        body: Center(child: Text(message)),
      );
}

And what’s so wrong with this solution you ask? Well, for one thing, notice the address bar on the person page:


That’s right; there’s nothing in the address bar except the base URL for the app. If it doesn’t change as you go from page to page in the app, then the user has no deep link to share. Luckily, we can fix this by simply passing the settings object passed to onGenerateRoute on to the MaterialPageRoute constructor like so:

onGenerateRoute: (settings) { // take the settings here
  ...
  return MaterialPageRoute(settings: settings, // and pass them here
                           builder: (_) => FamilyPage(family));
...

As you already know, the settings object contains the name of the route, which Flutter then uses to populate the address bar, since we’ve been so nice as to supply it:

Now we have the name of the route in the address bar as expected. However, that’s still not enough for deep linking (in fact, we’re just getting started).

Tip #2: pass object IDs in the route name; not objects in arguments

If you were to simply refresh that page at this point, here’s what you get:


The problem, which Flutter is doing a very good job of hiding, is that we’re trying to pull arguments out of a settings object that hasn’t got any. Those arguments are only set when we call Navigator.pushNamed. When we refresh the page, Flutter is taking the URL from the browser and doing the navigation itself, which ends up in our onGenerateRoute implementation. However, it doesn’t know anything about your app or its data, so settings.arguments is going to be null. Now you’re out of luck because you have no idea what data to build your page around. The only information you’ve got is what’s in settings.name.

So what do you do? That’s right — pack everything you want into the route name itself.

Time-out: A note about data models and unique IDs

If we’re going to pass references to objects around as strings, we’re going to need the objects to have unique identifiers:

class Person {
  final String id; // require unique id
  final String name;
  final int age;
  Person({@required this.id, this.name, this.age});
}

class Family {
  final String id; // require unique id
  final String name;
  final List<Person> people;
  Family({@required this.id, this.name, this.people});
}

static final families = [
  Family(
    id: 'f1', // provide unique id
    name: 'Sells',
    people: [
      Person(id: 'p1', name: 'Chris', age: 50), // provide unique id, etc.
      Person(id: 'p2', name: 'John', age: 25),
      Person(id: 'p3', name: 'Tom', age: 24),
    ],
  ),
  Family(
    id: 'f2',
    name: 'Addams',
    people: [
      Person(id: 'p1', name: 'Gomez', age: 55),
      Person(id: 'p2', name: 'Morticia', age: 50),
      Person(id: 'p3', name: 'Pugsley', age: 10),
      Person(id: 'p4', name: 'Wednesday', age: 17),
    ],
  ),
];

In this case, I simply added an id property to family and person objects, which sweeps many issues under the carpet. You have many questions to consider when adding unique IDs to your data model:

  • Are they globally unique? Anything with a globally unique ID can be shared globally, e.g. YouTube videos have globally unique IDs. That said, not every user may have access to every object in your app’s data, e.g. Google Docs all have globally unique IDs but not everyone can access every doc.

  • Are they unique to the user that owns the data? If so, the deep linking can still be useful for that user, perhaps as a favorite link in their browser.

  • Are they unique to the parent object? In that case, the parent object must be available when looking up the object. In the case of our sample, notice that Chris and Gomez have duplicate IDs; clearly we’ll need a family object around to disambiguate when looking up a person object by ID.

  • Where do the IDs come from? It’s very easy to generate globally unique IDs; in fact, there are several Dart packages that generate unique IDs on demand. However, the problem there is storage cost and long URLs. Even so, globally unique IDs solve a lot of problems and are used a lot.

    On the other hand, you may create a new object, send it to the database and then let the backend assign a shorter ID that’s still unique; lots of data storage solutions provide this service. The problem here is what ID do you use in the UI while you wait for the database to give you back the real ID, assuming you want to show the object prior to confirmation of synchronization with the backend.

    Or you may decide to let the client generate short but unique IDs, perhaps using max on a sequence of integers. The problem in that case is what happens in the case of a collision.

I didn’t say I had any answers to these questions; only that you have to consider them…
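For what it’s worth, if you do land on globally unique IDs, generating them in Dart is a one-liner with a package like uuid (shown here as a sketch; use whatever ID scheme fits your data model and backend):

import 'package:uuid/uuid.dart';

void main() {
  // generate a globally unique v4 ID for a new Family or Person
  final id = Uuid().v4();
  print(id); // e.g. 110ec58a-a0f2-4ac4-8393-c866d813b8d1
}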

Time-in: passing object IDs in routes

Once you have unique IDs in your data, you can use them to pass around in your route names:

class HomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(App.title)),
        body: ListView(
          children: App.families
              .map((f) => ListTile(
                  title: Text(f.name),
                    // pass the family id as a query parameter
                  onTap: () => Navigator.pushNamed(context, '/family?fid=${f.id}')))
              .toList(),
        ),
      );
}

Here we’re passing the family object’s ID via a standard web-style query parameter, e.g. /family?fid=f1. We can do the same thing for the person, but it needs two arguments:

class FamilyPage extends StatelessWidget {
  final Family family;
  FamilyPage(this.family);

  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(family.name)),
        body: ListView(
          children: family.people
              .map((p) => ListTile(
                  title: Text(p.name),
                  onTap: () =>
                            // pass the family id and the person id as query parameters
                      Navigator.pushNamed(context,
                                          '/person?fid=${family.id}&pid=${p.id}')))
              .toList(),
        ),
      );
}

Parsing them is like parsing query parameters on the web:

onGenerateRoute: (settings) {
  // split out the real route name from the args, e.g. /family and fid=f1
  final parts = settings.name.split('?');
  final args = parts.length == 2 ? Uri.splitQueryString(parts[1]) : null;
  switch (parts[0]) {
  ...
  case '/family':
      // look up the family and complain if it's not found
      final family =
          App.families.singleWhere((f) => f.id == args['fid'], orElse: () => null);
      if (family == null)
        return MaterialPageRoute(
            settings: settings,
            builder: (_) => Four04Page('unknown family: ${args["fid"]}'));

      return MaterialPageRoute(settings: settings, builder: (_) => FamilyPage(family));

    case '/person':
      // look up the family and person, complaining if either is not found
      final family =
          App.families.singleWhere((f) => f.id == args['fid'], orElse: () => null);
      if (family == null)
        return MaterialPageRoute(
            settings: settings,
            builder: (_) => Four04Page('unknown family: ${args["fid"]}'));
      final person =
          family.people.singleWhere((p) => p.id == args['pid'], orElse: () => null);
      if (person == null)
        return MaterialPageRoute(
            settings: settings,
            builder: (_) => Four04Page('unknown person: ${args["pid"]}'));

      return MaterialPageRoute(
          settings: settings, builder: (_) => PersonPage(family, person));
  ...
  }
},

At the top of the code, we split a URL in the form /name?args, e.g. /family?fid=f1. We switch on the route name, using the query parameters to look up the objects we need to construct the page widgets, using our 404 page to complain if we can’t find them. Suddenly we’ve got the first glimpses of deep linking!


Not only have we baked the object IDs into the link but if we refresh or paste the link into a new Chrome tab, it will actually show the same content! Or, if you want to change the person to p2 by editing the URL directly, you’ll see Morticia!


If you get a little too excited and enter an ID that’s not available, the app will let you know that you’ve stepped off the path.


Time-out: A note about the # in the URL

By now you have noticed the # in the URL of your Flutter app, which isn’t typical of most web apps. The # is a special marker that tells the browser to split the URL into two sections. Section one, to the left of the #, is for the HTTP server. This is how your Flutter app gets downloaded to your users’ machines.

Section two, to the right of the #, is used for client-only information, e.g. if you wanted to scroll to a specific anchor on your HTML page. Single Page Applications (or SPAs as the cool kids call ’em) use section two data to navigate between client-side “pages” of the app without going back to the server. Since your Flutter web app is just a really fancy SPA, it uses section one of the URL to download the entire app onto the user’s machine and then uses section two for navigation between “pages” as defined by Flutter. In fact, if you were to take the # out of the URL, bad things would happen, since the server won’t know what the heck you’re talking about:


You can certainly fix this on the server with redirection, e.g. /person => /#/person. While you’re at it, you’ll also want to redirect 404s to your own Flutter-based 404 page. Apparently most web servers consume Apache .htaccess files, which is supposed to make this kind of server-side forwarding easy to do. I haven’t tried this myself so please let me know in the comments if this actually works. Thanks!

Time-in: do we have deep linking yet?

At this point, you might think that we’re done with our deep linking support for Flutter web. If only that were so…

Tip #3: construct pages with object IDs, not objects

We have a problem in how we look up objects in our code right now, although it might be hard to see:

onGenerateRoute: (settings) {
      ...
      // objects on the network are further away than they appear...
      final family =
          App.families.singleWhere((f) => f.id == args['fid'], orElse: () => null);
      ...
}

The onGenerateRoute method is synchronous; it requires you to provide a route immediately. However, what if your object is stored on disk or on the network or needs to be requested via carrier pigeon? Our little model is simple and not representative of the real world, so it’s deceptive; it’s going to be more common that you’ll need to load the data from somewhere when you’re mapping object IDs to their respective objects. To support that case, I recommend you get into the habit of passing the object IDs to the page constructors so that the page itself can do the asynchronous lookup. That way your page can show a loading indicator as well as an error message if lookup fails. Here’s a simulated example of such a thing:

class FamilyPage extends StatelessWidget {
  final Future<Family> family;
  FamilyPage(String fid) : family = _load(fid); // construct with an object ID

  // load object by id
  static Future<Family> _load(String fid) async {
    // simulate a network lookup...
    await Future.delayed(Duration(seconds: 3));

    final family = App.families.singleWhere((f) => f.id == fid, orElse: () => null);
    if (family == null) throw 'unknown family: $fid';

    return family;
  }

  // funky FutureBuilder code to show three states: loading, failed, succeeded
  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(
          title: FutureBuilder<Family>(
            future: family,
            builder: (_, snapshot) => snapshot.hasData
                ? Text(snapshot.data.name)
                : snapshot.hasError ? Text('Page Not Found') : Text('Loading...'),
          ),
        ),
        body: FutureBuilder<Family>(
          future: family,
          builder: (_, snapshot) => snapshot.hasData
              ? ListView(
                  children: snapshot.data.people
                      .map((p) => ListTile(
                          title: Text(p.name),
                          onTap: () => Navigator.pushNamed(
                              context, '/person?fid=${snapshot.data.id}&pid=${p.id}')))
                      .toList(),
                )
              : snapshot.hasError ? Text(snapshot.error) : CircularProgressIndicator(),
        ),
      );
}

The code to load the family object is just like before, except we’re doing it asynchronously to simulate a networked environment. If there’s an error, we throw it as a string, which is caught by FutureBuilder. The FutureBuilder code is funky because we’re using the result of our asynchronous loading in two places (the appbar’s title and body) and because we’re dealing with three states: loading, failure and success, all of which result in three different UIs, e.g. here’s the one for loading:


Notice, however, that this is an environment in which we can take our sweet time loading data and giving the user progress indication, error messages, etc. This is unlike the onGenerateRoute method, which must return synchronously (and can now be greatly simplified):

onGenerateRoute: (settings) {
  final parts = settings.name.split('?');
  final args = parts.length == 2 ? Uri.splitQueryString(parts[1]) : null;
  switch (parts[0]) {
    case '/':
      return MaterialPageRoute(settings: settings, builder: (_) => HomePage());

    case '/family':
      // pass the family object ID and let the page look it up 
      return MaterialPageRoute(settings: settings, 
                               builder: (_) => FamilyPage(args['fid']));

    case '/person':
      // pass the family and person object IDs and let the page look them up 
      return MaterialPageRoute(
          settings: settings, builder: (_) => PersonPage(args['fid'], args['pid']));

    default:
      return MaterialPageRoute(
          settings: settings,
          builder: (_) => Four04Page('unknown route: ${settings.name}'));
  }
},

Now would be a good time to think about using the repository pattern in your code so that you can abstract away the data loading, use in-memory caching, etc., but that’s largely beyond the scope of this post.
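Just to give you a flavor of what I mean (a sketch only, not part of the sample repo, with hypothetical names), such a repository might look like this:

// Sketch: hide where Family/Person objects come from (memory, disk, network)
// behind an async repository so the pages don't care about the data source.
abstract class FamilyRepository {
  Future<Family> getFamily(String fid);
  Future<Person> getPerson(String fid, String pid);
}

class InMemoryFamilyRepository implements FamilyRepository {
  @override
  Future<Family> getFamily(String fid) async =>
      App.families.singleWhere((f) => f.id == fid, orElse: () => null);

  @override
  Future<Person> getPerson(String fid, String pid) async {
    final family = await getFamily(fid);
    return family?.people?.singleWhere((p) => p.id == pid, orElse: () => null);
  }
}

With something like this in place, the _load methods in the pages can call into the repository instead of reaching for App.families directly.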

At this point, we’ve got the basics working — you have a link you can hand out that carries enough information for your app to load the objects for each of its pages. However, because we’re using a flat namespace for our URLs, the navigation may not work the way you expect.

Tip #4: use nested routes to build a stack

You may not have realized it, but every time you press Refresh on one of your app’s pages or press Enter inside the address bar (even if you don’t change the URL), you’re actually reloading your app, i.e. starting over from main. If you’re on the home page when that happens, then you reset the browser and app navigation stack of your app to have one entry:

| Navigation Stack | Index     | Route |
| ---------------- | --------- | ----- |
| Home page        | 0 (shown) | /     |

Notice the lack of back button in your app in this case:


If you click on the list entries to navigate to the person page, thus also going through the family page, you’ll have a navigation stack with three entries:

| Navigation Stack | Index     | Route                   |
| ---------------- | --------- | ----------------------- |
| Person page      | 2 (shown) | /person?fid=xxx&pid=yyy |
| Family page      | 1         | /family?fid=xxx         |
| Home page        | 0         | /                       |

However, if you do a deep link in a new browser instance to the person page, you’re getting a stack with just two entries in it:

| Navigation Stack | Index     | Route                   |
| ---------------- | --------- | ----------------------- |
| Person page      | 1 (shown) | /person?fid=xxx&pid=yyy |
| Home page        | 0         | /                       |

This leaves you in a strange place: if you navigate to the person page normally, pressing back gets you to the family page. However, if you just start from scratch with a deep link to the person page, Flutter’s back button takes you to the home page. You might ask: where did the family page go when I do deep linking?

A better question: why the heck is the home page there in the first place?!

The answer is simple: so that you have a way to get back to the home page instead of being stranded in the middle of the app with no way (sic) home.

The real question: how does the home page get onto the navigation stack?

At this point, we’ve actually found Flutter’s support for deep linking (yay!). When you start your app, Flutter (web or mobile or desktop or projected on the insides of your eyelids) will look for a route name from your MaterialApp initialRoute property. If it’s null (and it should always be null), then Flutter will get its initial route from the launching host. As I mentioned, Android and iOS both also have the concept of deep linking and this is how those are handled. In the case of the web, that initial route will come from the address bar. This is the behavior we want, which is why we should leave initialRoute as null.

If the initial route starts with a / then Flutter will start building a stack of routes, using / as the delimiter. For example, if we have the deep link /one/two, then it’s going to build a navigation stack like the following:

| Navigation Stack | Index     | Route    |
| ---------------- | --------- | -------- |
| Two page         | 2 (shown) | /one/two |
| One page         | 1         | /one     |
| Home page        | 0         | /        |

This is what we want: you should use slashes in your routes so that the deep linking behavior you get is the same as the behavior you get when you’re navigating normally in your app. In the case of our sample, that means our person route really needs to change to use a / instead of the &, i.e. /person?fid=xxx&pid=yyy => /family?fid=xxx/person?pid=yyy.

Composing the route in the new form is easy:

Navigator.pushNamed(context, '/family?fid=${snapshot.data.id}/person?pid=${p.id}')

However, it’s parsing it that gets a little trickier, since we’re no longer looking for a route name that starts with /person but rather one that looks like /family(something)/person(something). The way to solve this problem is, of course, with regular expressions:

onGenerateRoute: (settings) {
  final name = settings.name;

  // start from most specific first
  if (name.startsWith('/family') && name.contains('/person')) {
    final re = RegExp(r'\/family\?fid=(?<fid>[^\/]+)\/person\?pid=(?<pid>[^\/]+)$');
    final match = re.firstMatch(name);
    if (match != null) {
      final fid = match.namedGroup('fid');
      final pid = match.namedGroup('pid');
      return MaterialPageRoute(settings: settings, builder: (_) => PersonPage(fid, pid));
    }
  }

  if (name.startsWith('/family')) {
    final re = RegExp(r'\/family\?fid=(?<fid>[^\/]+)$');
    final match = re.firstMatch(name);
    if (match != null) {
      final fid = match.namedGroup('fid');
      return MaterialPageRoute(settings: settings, builder: (_) => FamilyPage(fid));
    }
  }

  if (settings.name == '/') {
    return MaterialPageRoute(settings: settings, builder: (_) => HomePage());
  }

  return MaterialPageRoute(
      settings: settings, builder: (_) => Four04Page('unknown route: ${settings.name}'));
},

Instead of just blindly splitting the name looking for arguments, we have more complicated route names now, so we need a more nuanced way of recognizing them. Further, to reduce the complexity just a little, we now check the route names from most specific to most generic, which allows us to look for /family routes with a simple use of startsWith, safe in the knowledge that if there was a /family/person route to be found, it would have been handled earlier in the code.

However, with that bit of code, now our deep link to /family?fid=xxx/person?pid=yyy gives us the same navigation stack as if we’d navigated there interactively in our app. In the case of deep linking, the Flutter back button on the AppBar is different from the back button on the browser.

In all cases, the browser back button will take us back to the page where we just were, so if we paste a deep link into a new browser window, the browser back button will take us back to the blank Chrome page — it’s only seen one of the Flutter app pages, so it’s got just one in its stack.

On the other hand, during the deep linking process, Flutter has built up a stack of routes as deep as appropriate based on your initial route. When you press the Flutter back button, you’ll get to pages you might not have seen yet from that stack.

This might be confusing to the user, but it seems the most useful blending of the two worlds; the browser always takes you back to the last page you saw and the Flutter back button takes you to the logical page on the stack so you can always get back to home in your app. Win-win.

And with that, we’re finally in the state we want to be in; we have achieved deep linking in our Flutter web app!

Bonus Tip #1: better code through abstraction

You may look at the complicated logic for string matching and formatting littered throughout the sample code and think to yourself, there has to be a better way! In fact, there are several Flutter routing helper packages on pub.dev that you may want to check out. I didn’t understand what they were doing or why I cared until I dug this deep (sic) into the topic myself (and hence this blog post). That said, now that you understand it, too, you can build yourself a little helper that does the job for you.

abstract class Router {
  // derived classes implement these methods
  bool matches(RouteSettings settings);
  MaterialPageRoute route(RouteSettings settings);

  // helper for MaterialApp.onGenerateRoute
  static Route<dynamic> onGenerateRoute(RouteSettings settings) {
    final router = routers.firstWhere((r) => r.matches(settings), orElse: () => null);
    return router != null
        ? router.route(settings)
        : MaterialPageRoute(
            settings: settings, 
            builder: (_) => Four04Page('unknown route: ${settings.name}'));
  }
  
  static final routers = [
    // start with most specific one first
    PersonPageRouter(),
    FamilyPageRouter(),
    HomePageRouter(),
  ];
}

Here I’ve built a little Router helper class. It’s meant to be the base for every page-specific router in my app, e.g. a PersonPageRouter knows how to route to a PersonPage. The matches and route methods are for the derived class to implement. Further, the Router provides an implementation of onGenerateRoute based on the list of routers it also contains.

The magic is in the matches and route method implementations, like this one for the FamilyPageRouter:

class FamilyPageRouter extends Router {
  static final _routeRE = RegExp(r'^\/family\?fid=(?<fid>[^\/]+)$');

  @override
  bool matches(RouteSettings settings) => settings.name.startsWith('/family');

  @override
  MaterialPageRoute route(RouteSettings settings) {
    assert(matches(settings));
    final match = _routeRE.firstMatch(settings.name);
    return match == null
        ? null
        : MaterialPageRoute(
            settings: settings, builder: (_) => FamilyPage(match.namedGroup('fid')));
  }

  static Future<T> navigate<T>(BuildContext context, Family family) =>
      Navigator.pushNamed<T>(context, '/family?fid=${family.id}');
}

The matches method simply asks the router if the route name is a match. If it is, it asks the route method to parse the route name and produce the appropriate route. Notice also that if it can’t find the information it needs to build a route, e.g. no family object ID, it returns null. Flutter will use this to abort the routing, which is a handy way to stop the user from going anywhere with a malformed URL.
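For completeness, a PersonPageRouter follows the same pattern; this sketch just applies the regular expression we built earlier to the /family?fid=xxx/person?pid=yyy route shape (the actual sample code may differ slightly):

class PersonPageRouter extends Router {
  static final _routeRE =
      RegExp(r'^\/family\?fid=(?<fid>[^\/]+)\/person\?pid=(?<pid>[^\/]+)$');

  @override
  bool matches(RouteSettings settings) =>
      settings.name.startsWith('/family') && settings.name.contains('/person');

  @override
  MaterialPageRoute route(RouteSettings settings) {
    assert(matches(settings));
    final match = _routeRE.firstMatch(settings.name);
    return match == null
        ? null
        : MaterialPageRoute(
            settings: settings,
            builder: (_) =>
                PersonPage(match.namedGroup('fid'), match.namedGroup('pid')));
  }

  static Future<T> navigate<T>(BuildContext context, Family family, Person person) =>
      Navigator.pushNamed<T>(
          context, '/family?fid=${family.id}/person?pid=${person.id}');
}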

In addition, each router provides a static method called navigate that takes the appropriate arguments to be able to push the correctly formatted named route. This allows us to avoid sprinkling that knowledge all over your code. Instead, to route to the family page, you simply have to do the following:

class HomePage extends StatelessWidget {
  @override
  Widget build(BuildContext context) => Scaffold(
        appBar: AppBar(title: Text(App.title)),
        body: ListView(
          children: App.families
              .map((f) =>
                  ListTile(title: Text(f.name), 
                           // navigate to the page for the selected family
                           onTap: () => FamilyPageRouter.navigate(context, f)))
              .toList(),
        ),
      );
}

With this bit of refactoring, implementing the routing for the MaterialApp is very simple:

class App extends StatelessWidget {
  static final title = 'Flutter Web Deep Linking Demo';
  static final families = ...;

  @override
  Widget build(BuildContext context) => MaterialApp(
        title: title,
        theme: ThemeData(primarySwatch: Colors.blue),
        home: HomePage(),
        // all of your navigation is handled here
        onGenerateRoute: Router.onGenerateRoute,
      );
}

Adding new routes is now a matter of defining a new Router implementation and adding it to the list of routers. Easy Peazy.

Bonus Tip #2: slash-only URLs without parameter names

Since we’re now using slashes to separate sub-pages, notice that our URL is verbose and redundant:

/family?fid=xxx/person?pid=yyy

It’s pretty clear simply by the placement which is the family ID and which is the person ID. For that reason, many apps prefer a syntax like this:

/family/xxx/person/yyy

The trick here is that Flutter is going to attempt to get a route for each of the segments, even though segments like /family without the family ID don’t make any sense. However, so long as you don’t return null for the first one (home) and the last one (the person page in our case), it’s ok to return a null in the middle and Flutter will not place them onto the navigation stack. To enable this bit of magic, we want our routers to produce routes like the following:

| Navigation Stack | Index     | Route                                |
| ---------------- | --------- | ------------------------------------ |
| Person page      | 2 (shown) | /family/xxx/person/yyy => PersonPage |
| n/a              | n/a       | /family/xxx/person => null           |
| Family page      | 1         | /family/xxx => FamilyPage            |
| n/a              | n/a       | /family => null                      |
| Home page        | 0         | / => HomePage                        |

With our little router helper, we can get the slash-based URL formatting we want by simply changing the regular expression and the route name to use a slash instead of ?fid=:

class FamilyPageRouter extends Router {
  // use / instead of ?fid=
  static final _routeRE = RegExp(r'^\/family\/(?<fid>[^\/]+)$');

  @override
  bool matches(RouteSettings settings) => settings.name.startsWith('/family');

  @override
  MaterialPageRoute route(RouteSettings settings) {
    assert(matches(settings));
    final match = _routeRE.firstMatch(settings.name);
    return match == null
        ? null
        : MaterialPageRoute(
            settings: settings, builder: (_) => FamilyPage(match.namedGroup('fid')));
  }
  
  static Future<T> navigate<T>(BuildContext context, Family family) =>
      // use / instead of ?fid=
      Navigator.pushNamed<T>(context, '/family/${family.id}');
}

In the case of /family, which should return null (because it has no family ID), the trick is the combination of the matches method, which knows that this route is meant to be handled in this router, and the route method, which sees that the data in /family isn’t enough to actually produce a route, so returns null. The person page works the same way, resulting in the following:


Our new deep linking format produces the navigation stack we want. And, with that earlier bit of refactoring magic and the updated router implementations, we can decide to format the route names however we like without having to change any of the rest of the app’s code.

Where are we?

Adding deep linking to your Flutter app comes with considerations about how to format route names to get a navigation stack that makes sense to your users. For Flutter on the web, we have to pile on the fact that users can enter URLs directly into the address bar and share the URLs far and wide; this is often something we want (let’s go viral!), but comes with more considerations, including making sure that the route name shows up in the address bar, handling malformed URLs and packing object IDs into the route name itself. This in turn has implications about how to structure the data model and where and how to load objects asynchronously, if needed.

Flutter for web is relatively new and still in beta. The needs of the web have pushed on Flutter at its core, including a new Navigation design proposal that should make it easier to do the right thing when it comes to routing in your app, whether it runs on the web or not. Regardless, by understanding what’s going on and following the best practices outlined above, you can build great Flutter apps on the web right now!

August 4, 2019 flutter

Fun with Curl and Dart

Fun with Curl and Dart

If you’re a Dart programmer, the curl command doesn’t really help you. Oh, it can tease you with all of its wonderful functionality, but you still have to take anything you can do with curl and manually translate it into Dart code.

Until now.

Updating curlconverter.com for Dart

For almost 5 years now, Nick Carneiro has maintained a wonderful site for converting curl commands into network code for your favorite language: curlconverter.com.

I guess his most favorite language must be Python, since that’s the default, but since last week, there’s been a new language listed: Dart. In addition to providing this site, Nick did one other thing: he exposed the underlying tech as an OSS repo on GitHub: curlconverter. And that repo is a wonder to behold, because it was factored in such a way that I could add Dart generation support in a weekend. Truly an example for other multi-language tools to follow.

Chrome DevTools: Network

The reason you care about turning curl into Dart is that Chrome DevTools uses the curl command format as a universal capture format for requests. That means that you can go to your favorite web site, dig around on the Network tab of the Chrome DevTools, right-click on the request that you like and pull it out as a curl command.

When you do, you’ll end up with something like this:

curl "https://www.fantasynamegen.com/barbarian/short/" -H "Upgrade-Insecure-Requests: 1" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36" -H "Referer: https://www.fantasynamegen.com/" --compressed

Even if you never run this as a curl command, you can still use it to generate code for your language of choice.

Dart networking code

Pasting the curl from Chrome DevTools into Nick’s curl converter website gives you Dart code that looks like this:

import 'package:http/http.dart' as http;

void main() async {
  var headers = {
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36',
    'Referer': 'https://www.fantasynamegen.com/',
    'accept-encoding': 'gzip',
  };

  var res = await http.get('https://www.fantasynamegen.com/barbarian/short/', headers: headers);
  if (res.statusCode != 200) throw Exception('get error: statusCode= ${res.statusCode}');
  print(res.body);
}

This is actually a complete program and if you put it into a file, e.g. fanname.dart, you’re almost ready to run it. You also need a minimal pubspec.yaml that looks like this:

# pubspec.yaml
name: fanname
dependencies:
  http:

The pubspec file lists the name of your app and its dependencies, specifically the http package. These two files together are enough to run the program and get the same output as the curl command:

$ pub get
$ dart fanname.dart

The call to pub get pulls down the dependencies and the call to dart runs your program. Of course, this assumes that you’ve got the Dart SDK installed and on the PATH. Assuming you do, you’ll have output that looks like this:

Actually, the output in this case goes on for a lot longer than that, so it isn’t particularly helpful, but with a little filtering, it can be:

$ dart fanname.dart | grep '<li>' | sed 's/<\/*li>//g'
Vadryt
Hildecon
Axra
Rordryt
Freyagar
Egelkele
Mand
Sigceo
Krokrolm
Ric
Vase
Hildekrucen
Rabeorth
Thakald
Morncrom
Lafthe
Garrak
Caror'n
Theodtarg
Ordhall

Where are we?

If the network code you want is already being executed via curl or on the web, that means you can take that curl command and turn it into Dart code with zero effort on curlconverter.com. The code from that site can be executed just like a curl command immediately or used as the start of your own code.

The example I showed in this post generated HTML, but if instead the result was JSON, then you can take that JSON output and paste it into quicktype.io to get the JSON serialization/deserialization code for Dart, too. It’s a great time to be a Dart programmer!
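And if you’d rather skip the code generation step for a simple case, decoding JSON by hand in Dart is only a few lines. Here’s a sketch with a made-up URL and payload, using the same http package style as the generated code above:

import 'dart:convert';

import 'package:http/http.dart' as http;

void main() async {
  // hypothetical endpoint that returns a JSON array of names
  var res = await http.get('https://example.com/api/names');
  if (res.statusCode != 200) throw Exception('get error: statusCode= ${res.statusCode}');

  // decode the JSON body into Dart objects and print each one
  var names = json.decode(res.body) as List<dynamic>;
  for (var name in names) print(name);
}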