Gemini 1.5 Flash Quickstart Guide
Introduction #
On May 30th, Google released Gemini 1.5 Pro and Gemini 1.5 Flash. Flash is a particularly fast and cost-efficient model in the Gemini series. Today, we will show you how to query Gemini 1.5 Flash.
Getting started #
Start by opening this Replit template and clicking "Use template."
Set up a Google API Key #
You'll need an API key. Head over to Google AI Studio and follow the instructions there.
Next, open Replit "Secrets" by typing "Secrets" into the search bar on the left. Add a new secret named GOOGLE_API_KEY, paste in your key, and save it.
Replit Secrets are encrypted, making them a secure place to store your application credentials.
Call Gemini 1.5 Flash #
Replit takes care of all of the setup for you, so at this point, you can click "Run." You will see a response in the Replit Console. The response should describe the image that lives in the Repl, image.png.
Congratulations, you have officially used a Google model! 🎉
Explaining the code #
Importing the packages #
The project starts by importing libraries. Libraries are the building blocks of software. In this case, we are using:
- os: This module provides a way to use operating system dependent functionality like reading environment variables.
- google.generativeai: This is the library from Google for interacting with their generative AI models.
- PIL.Image: This module comes from Pillow, the maintained fork of the Python Imaging Library (PIL), and provides the ability to open, modify, and save image files.
Import the API key #
This line retrieves the API key stored in the environment variable GOOGLE_API_KEY. API keys are used to authenticate and bill your account. If your API key is exposed, anyone could use it, and you would be billed. The keys are highly sensitive, so we store them in Replit Secrets.
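Since Replit exposes Secrets to your program as environment variables, reading the key is a one-liner; a minimal sketch with a fallback message looks like this:

```python
import os

# Replit Secrets are exposed as environment variables at runtime,
# so the key never needs to appear in the source code.
api_key = os.environ.get("GOOGLE_API_KEY", "")
if not api_key:
    print("GOOGLE_API_KEY is not set; add it under Replit Secrets.")
```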
Generate content using the model #
This portion initializes the GenerativeModel and then asks the model to generate a response based on the question ("What's in this photo?") and the provided image (image.png).
Print the response #
Finally, we print the response in the console, so we can see what the model responded with.
What's next #
Now that you have tested your first Google LLM, we recommend you try building this barista bot.
If you would like to try other projects, check out the remainder of our guides.