Google AI Studio and Gemini API

Shankar Kumarasamy
4 min read · Apr 30, 2024

Google offers the powerful Gemini family of models for generative AI applications. Developers and AI enthusiasts can use Google AI Studio to explore the different capabilities of Google's AI offering, while enterprises can integrate the Gemini API directly into their applications.

1. Intro to Google AI Studio:

Google AI Studio is a cloud-based platform that acts as a bridge between the user and powerful generative AI models, like Google’s Gemini. It allows users with little to no coding experience to interact with these AI models and get outputs.

Here’s a breakdown of what Google AI Studio offers:

User-Friendly Interface:

No coding is required. AI Studio provides a simple interface where users can interact with the AI model through prompts.

Prompt-Based Generation:

The user guides the AI model with instructions called “prompts.” Prompts can be written as text, and images can be uploaded to give the AI additional context.

Prompts can include one or more types of content, such as text and images.

A summary of high-level strategies to keep in mind when designing prompts, followed by a short example:

  • Give the model instructions on what to do.
  • Make the instructions clear and specific.
  • Specify any constraints or formatting requirements for the output.
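
Putting these strategies together, a well-formed prompt might look like the following sketch (the review text is invented for illustration):

```
Summarize the customer review below in exactly two sentences.
Focus on product complaints, ignore shipping details, and return
the summary as a bulleted list.

Review: "The headphones arrived quickly, but the left ear cup
crackles at high volume and the battery barely lasts three hours."
```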

Fine-tuning with Examples:

Not satisfied with the initial response? AI Studio lets you “fine-tune” the model with additional examples. This helps the model understand your preferences and generate more tailored responses in the future.

Integration with Vertex AI:

For experienced users, AI Studio integrates with Vertex AI, Google Cloud’s suite of AI tools. This allows access to more advanced features for further customization and control over the AI generation process.

Benefits:

  • Accessibility: Opens the door to generative AI for anyone, regardless of coding experience.
  • Idea Exploration: Helps the user brainstorm and explore creative ideas through AI-generated outputs.
  • Experimentation: Allows the user to experiment with different prompts and settings to see what the model can create.

[Image: High-level interaction of the user with Google AI Studio]

Overall, Google AI Studio bridges the gap between complex AI models and everyday users. It empowers the user to be creative and explore the potential of generative AI in a user-friendly and accessible way.

2. Intro to Gemini API:

The Gemini API gives you access to Gemini models. Gemini models are built from the ground up to be multimodal, so they can reason seamlessly across text, images, code, and audio. These models can be used to develop a wide range of applications.

Generate API key: The first and foremost step is to create an API key, which can be generated from Google AI Studio.
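
Once generated, the key can be supplied to the official google-generativeai Python SDK. A minimal sketch, assuming the key is stored in an environment variable named GEMINI_API_KEY:

```python
import os

import google.generativeai as genai

# Read the key created in Google AI Studio from the environment
# (the variable name is an assumption) instead of hard-coding it.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Quick sanity check: list the models this key can access.
for model in genai.list_models():
    print(model.name)
```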

API Overview: The Gemini API lets you use both text and image data for prompting, depending on what model variation you use. For example, you can generate text using text prompts with the gemini-pro model and use both text and image data to prompt the gemini-pro-vision model.
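
For example, with the Python SDK the two model variations can be prompted like this (a sketch assuming the API key is already configured as above; the image path is hypothetical):

```python
import PIL.Image
import google.generativeai as genai

# Text-only prompting with the text model.
text_model = genai.GenerativeModel("gemini-pro")
response = text_model.generate_content("Explain text embeddings in one paragraph.")
print(response.text)

# Mixed text + image prompting with the vision model.
vision_model = genai.GenerativeModel("gemini-pro-vision")
image = PIL.Image.open("product_photo.jpg")  # hypothetical local file
response = vision_model.generate_content(["What product is shown here?", image])
print(response.text)
```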

API Versions: The Gemini API currently has two versions: v1 and v1beta. v1 is the stable version, while v1beta includes early-access features and may be subject to breaking changes.

Capabilities of the API:

A) Model Tuning — Tuning is a technique for improving the performance of the Gemini API text models (specifically Gemini 1.0 Pro and text-bison-001) on specific tasks. This is achieved by providing the model with a dataset of examples relevant to the desired task; for summarization, for instance, pairs of passages and their summaries. By learning from these examples, the model improves its ability to handle similar tasks in the future. A tuning dataset can be created with as few as 20 examples, although 100–500 is generally recommended depending on the task's complexity. Google AI Studio facilitates uploading this data or reusing existing prompts with examples. The tuning process involves specifying parameters such as the number of epochs (training iterations) and the learning rate, with recommended configurations provided. Once tuning is complete, the tuned model can be used for the task it was trained on.
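
As a rough sketch of what this looks like with the Python SDK's tuning interface (the dataset, model id, and hyperparameters below are purely illustrative):

```python
import google.generativeai as genai

# A toy dataset -- a real tuning job should have roughly 20-500 examples.
training_data = [
    {"text_input": "Review: battery lasts two days, great sound.", "output": "Positive"},
    {"text_input": "Review: left ear cup crackles, returned it.", "output": "Negative"},
]

# Start a server-side tuning job against the tunable text model.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.0-pro-001",
    training_data=training_data,
    id="review-classifier-demo",  # hypothetical model id
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)

# Wait for the job to finish, then prompt the tuned model like any other.
tuned = operation.result()
model = genai.GenerativeModel(model_name=tuned.name)
print(model.generate_content("Review: arrived broken.").text)
```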

B) Function Calling — In generative AI models, function calling allows developers to define custom functions and integrate them with the model. The model never calls these functions directly; instead, it generates structured data suggesting a function name and candidate arguments. Developers use this data to call external APIs and feed the retrieved information back into the model's response for more comprehensive answers. This lets the model interact with real-time information and external services, enhancing its ability to provide relevant, contextual responses.
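
A minimal sketch using the Python SDK, which can register plain Python functions as tools (the exchange-rate helper is hypothetical and returns a canned value):

```python
import google.generativeai as genai

def get_exchange_rate(currency_from: str, currency_to: str) -> dict:
    """Return the exchange rate between two currencies.

    Hypothetical helper: a real implementation would call an external API.
    """
    return {"rate": 1.08}

# Register the function as a tool; when a prompt warrants it, the model
# emits a structured call (name + arguments) rather than plain text.
model = genai.GenerativeModel("gemini-pro", tools=[get_exchange_rate])
chat = model.start_chat(enable_automatic_function_calling=True)
response = chat.send_message("How many euros would I get for 100 US dollars?")
print(response.text)
```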

C) Embedding — Text embeddings are a powerful NLP technique that converts words, phrases, and sentences into numerical codes. These codes capture the meaning and context of the text, allowing similar text to have similar codes. This lets computers analyze and compare text data, opening doors to various applications like information retrieval, text classification, and data clustering.
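
A small sketch with the Python SDK's embed_content call, comparing two related sentences by cosine similarity:

```python
import google.generativeai as genai

def embed(text: str, task_type: str) -> list[float]:
    # embedding-001 is the text embedding model exposed via the Gemini API.
    return genai.embed_content(
        model="models/embedding-001", content=text, task_type=task_type
    )["embedding"]

query = embed("How do I reset my password?", "retrieval_query")
doc = embed("Steps to recover a forgotten account password.", "retrieval_document")

# Cosine similarity: semantically similar text maps to nearby vectors.
dot = sum(x * y for x, y in zip(query, doc))
norm = sum(x * x for x in query) ** 0.5 * sum(y * y for y in doc) ** 0.5
print(f"similarity: {dot / norm:.3f}")
```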

D) Safety — The Gemini API offers adjustable safety settings to control the type of content generated. By default, it blocks content with a medium or higher chance of being unsafe across different categories like hate speech or harassment. Developers can adjust these filters for their specific use case. For instance, a video game might allow more violence-related content compared to a children’s app. The focus is on the probability of harm, meaning even content with a low chance of severe harm might be blocked. To ensure user safety, developers should carefully consider the appropriate level of filtering for their application.
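
For example, the Python SDK accepts per-category thresholds (the thresholds below are illustrative only; choose levels suited to your audience):

```python
import google.generativeai as genai
from google.generativeai.types import HarmBlockThreshold, HarmCategory

model = genai.GenerativeModel(
    "gemini-pro",
    safety_settings={
        # Relax one category (e.g., for a video game)...
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        # ...while keeping another strict (e.g., for a children's app).
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
)
response = model.generate_content("Describe a sword-fight scene for a video game.")
print(response.text)
```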

Share your thoughts on how you intend to use Google Gemini in your application.
Happy learning!

Originally published at http://shankarkumarasamy.blog on April 30, 2024.
