Rapid Therapeutic AI Development Using TxGemma and ADK Integration

Artificial Intelligence is rapidly transforming how life sciences organizations analyze therapeutic data, predict drug responses, and accelerate R&D. However, building AI models for therapeutic applications is often expensive, computationally intensive, and challenging for teams without deep ML expertise.

This is where TxGemma, developed by Google, becomes a game-changer.

TxGemma is a suite of lightweight, fine-tuned language models based on Gemma 2, specifically optimized for therapeutic prediction tasks. It is designed to help organizations do the following (a minimal usage sketch follows the list):

  • Generate predictions

  • Perform therapeutic classification

  • Run domain-specific text-based reasoning

  • Build powerful models with limited data and minimal compute requirements
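
For a quick sense of what this looks like in practice, the prediction-tuned TxGemma variants can be queried with standard Hugging Face tooling. The sketch below assumes the google/txgemma-2b-predict checkpoint and an illustrative TDC-style prompt; check the model card for the exact prompt format your task expects.

# Minimal sketch: querying a TxGemma prediction variant with Hugging Face transformers.
# The "google/txgemma-2b-predict" checkpoint and the prompt wording are illustrative
# assumptions; adjust them to match the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/txgemma-2b-predict"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Example therapeutic prediction prompt (illustrative wording).
prompt = (
    "Instructions: Answer the following question about drug properties.\n"
    "Question: Given a drug SMILES string, predict whether it crosses the blood-brain barrier.\n"
    "Drug SMILES: CC(=O)Oc1ccccc1C(=O)O\n"
    "Answer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))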

Key Highlights

Parameter Sizes: TxGemma is available in three parameter sizes: 2B, 9B, and 27B.

Training: TxGemma models are fine-tuned on datasets from the Therapeutics Data Commons (TDC).

Modality: Supports text understanding, generation, and multimodal extensions.

Licensing: Open and flexible for research, development, and production.

Its architecture makes it ideal for use cases that require intelligence but cannot afford large infrastructure.
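
Because the checkpoints are trained on TDC tasks, the published model repositories also bundle prompt templates for those tasks. The sketch below assumes a tdc_prompts.json file in the Hugging Face repository and a {Drug SMILES} placeholder inside the template; both are assumptions to verify against the model card.

# Sketch: building a TDC-style prompt from templates bundled with a TxGemma checkpoint.
# The file name "tdc_prompts.json", the task key, and the "{Drug SMILES}" placeholder
# are assumptions; confirm them against the published model card.
import json
from huggingface_hub import hf_hub_download

prompts_path = hf_hub_download(
    repo_id="google/txgemma-2b-predict", filename="tdc_prompts.json"
)
with open(prompts_path) as f:
    tdc_prompts = json.load(f)

task_name = "BBB_Martins"                # blood-brain barrier permeability task
drug_smiles = "CC(=O)Oc1ccccc1C(=O)O"    # aspirin, as an example input
prompt = tdc_prompts[task_name].replace("{Drug SMILES}", drug_smiles)
print(prompt)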

Why Fine-Tuning Matters in Therapeutic AI

Deep Biological Understanding Through Fine-Tuning

Fine-tuning plays a central role in enhancing TxGemma’s performance for biomedical applications. The goal is to equip the model with a deep understanding of complex biological systems, allowing it to interpret therapeutic data with higher accuracy and scientific relevance.

Fine-tuning enables TxGemma to understand:

  • Small molecule structures

  • Protein and nucleic-acid interactions

  • Disease-mechanism relationships

  • Cell line responses

By integrating this domain knowledge, TxGemma becomes significantly more effective for tasks such as predicting drug toxicity, gaining insights into mechanisms of action, and classifying therapeutics.
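
As a rough sketch of what such domain adaptation can look like in practice, the snippet below outlines parameter-efficient (LoRA) adaptation of a TxGemma checkpoint using the Hugging Face peft library. The base checkpoint, target modules, and hyperparameters are illustrative placeholders rather than a prescribed recipe, and the actual training loop is omitted.

# Sketch: parameter-efficient (LoRA) adaptation of a TxGemma checkpoint.
# Target modules, rank, and the base checkpoint are illustrative choices;
# the training loop itself (e.g. transformers Trainer or trl SFTTrainer) is omitted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_id = "google/txgemma-2b-predict"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# From here, train on prompt/answer pairs built from your therapeutic dataset
# (e.g. TDC tasks) using a standard supervised fine-tuning loop.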

The Competitive Edge of TxGemma Compared to Large LLMs

Feature               | Large LLMs | TxGemma
Compute Cost          | High       | Low
Domain Knowledge      | General    | Therapeutic-specific
On-Prem Deployability | Difficult  | Easy
Fine-Tuning Cost      | Very high  | Minimal
Speed                 | Moderate   | Fast

TxGemma offers domain precision without heavy infrastructure, making it perfect for pharma, academic labs, and biotech startups.

Use Cases of TxGemma in Life Sciences

Drug Response Prediction: TxGemma analyzes cell-line, molecular, and therapeutic data to forecast drug behavior in specific biological environments.

Toxicity & Safety Assessment: It identifies potential adverse effects, off-target interactions, and high-risk compounds early in drug development.

Mechanism-of-Action Reasoning: TxGemma interprets MoA descriptions, classifies mechanisms, and aids in drug-repurposing decisions.

Therapeutic Classification: Supports ATC-level categorization, novel class inference, and multi-label biomedical classification for clinical/R&D workflows.

Scientific Summary Generation: Summarizes biomedical literature into concise insights, MoA notes, research findings, and clinical narratives to speed decision-making.
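
To make these use cases concrete, the snippet below sketches a few illustrative prompts that could be sent to a chat-tuned TxGemma deployment. The wording is only an example; adapt it to the prompt format of your chosen checkpoint.

# Illustrative prompts for common TxGemma use cases; the wording is an example,
# not a prescribed template. These could be sent to a local checkpoint or a
# deployed endpoint.
use_case_prompts = {
    "drug_response": (
        "Given the SMILES string CC(=O)Oc1ccccc1C(=O)O and the cell line MCF7, "
        "predict whether the compound is likely to inhibit cell growth. Explain briefly."
    ),
    "toxicity": (
        "Does the compound with SMILES CC(=O)Oc1ccccc1C(=O)O show a risk of "
        "clinical toxicity? Answer yes or no and justify."
    ),
    "mechanism_of_action": (
        "Summarize the likely mechanism of action of a selective COX inhibitor "
        "and suggest one drug-repurposing hypothesis."
    ),
}

for name, prompt in use_case_prompts.items():
    print(f"--- {name} ---\n{prompt}\n")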


TxGemma Integration with ADK

Once you deploy your TxGemma model from the Vertex AI Model Garden console, you can collect the endpoint ID from the list of deployed models and endpoints.
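
If you prefer to look up the endpoint programmatically instead of through the console, a sketch along these lines with the Vertex AI SDK can list your deployed endpoints; the project ID and region below are placeholders.

# Sketch: listing deployed Vertex AI endpoints to find the TxGemma endpoint ID.
# "your-project-id" and "us-central1" are placeholders for your own project and region.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

for endpoint in aiplatform.Endpoint.list():
    # display_name is what you set at deploy time; name holds the numeric endpoint ID
    print(endpoint.display_name, endpoint.name)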

We use this endpoint ID in our ADK agent code.

The sample code snippet below shows the integration:

Code Highlight
import subprocess

import requests

from google.adk.agents import Agent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmResponse


def build_payload(callback_context: CallbackContext, llm_response: LlmResponse) -> LlmResponse:
    # Pull the intermediate results stored in session state by earlier steps.
    summary = callback_context.state.get("context_summary", "Unknown")
    question = callback_context.state.get("refined_question", "Unknown")
    precautions = callback_context.state.get("precautions", "No specific precautions identified.")

    # Build the prompt that is forwarded to the deployed TxGemma endpoint.
    content = (
        f"\nContext: {summary}"
        f"\nQuestion: {question}"
        f"\nPrecautions to include: {precautions}"
        f"\nAnswer (with precautions integrated):\n"
    )

    payload = {
        "instances": [{
            "@requestFormat": "chatCompletions",
            "messages": [{"role": "user", "content": content}],
            "max_tokens": 3000,
            "temperature": 1.5,
        }]
    }

    # Replace these with your own project, endpoint, and region.
    PROJECT_ID = "your-project-id"
    ENDPOINT_ID = "your-endpoint-id"
    REGION = "us-central1"

    # Fetch an access token from the active gcloud credentials.
    access_token = subprocess.getoutput("gcloud auth print-access-token").strip()

    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
        f"/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict"
    )
    headers = {"Authorization": f"Bearer {access_token}", "Content-Type": "application/json"}

    # Call the TxGemma endpoint and overwrite the agent's response with its answer.
    res = requests.post(url, headers=headers, json=payload)
    try:
        data = res.json()
        response_text = data["predictions"][0][0]["message"]["content"]
        llm_response.content.parts[0].text = response_text
        callback_context.state["final_response"] = response_text
    except Exception:
        error_msg = f"Error parsing response:\n\n{res.text}"
        llm_response.content.parts[0].text = error_msg
        callback_context.state["final_response"] = error_msg

    return llm_response


# PAYLOAD_PROMPT is the instruction prompt for the agent, defined elsewhere in the project.
tx_agent = Agent(
    name="TxAgent",
    model="gemini-2.5-pro",
    instruction=PAYLOAD_PROMPT,
    after_model_callback=build_payload,
)
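
To try the agent end to end, you can use the ADK developer UI (adk web) or drive it from a small script. The sketch below uses ADK's in-memory runner; the app name, user ID, and sample question are placeholders, and it assumes any earlier pipeline steps have populated the state keys that build_payload reads (otherwise the defaults are used).

# Sketch: driving tx_agent locally with ADK's in-memory runner.
# App name, user ID, and the sample question are placeholders.
import asyncio

from google.adk.runners import InMemoryRunner
from google.genai import types


async def main() -> None:
    runner = InMemoryRunner(agent=tx_agent, app_name="tx_app")
    session = await runner.session_service.create_session(
        app_name="tx_app", user_id="user_1"
    )

    message = types.Content(
        role="user",
        parts=[types.Part(text="Assess the clinical toxicity risk of aspirin.")],
    )

    async for event in runner.run_async(
        user_id="user_1", session_id=session.id, new_message=message
    ):
        if event.is_final_response() and event.content and event.content.parts:
            print(event.content.parts[0].text)


asyncio.run(main())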

With this setup, you can expect rich, therapeutic context-aware responses, demonstrating how TxGemma integrates with ADK to generate biologically grounded inferences.

Conclusion

TxGemma represents a new generation of AI: powerful yet compact, accessible yet advanced. It is the perfect model for developers, startups, enterprises, educators, and researchers who need fast, efficient, and responsible intelligence without the high costs of large-scale AI.

As the world moves toward responsible, democratized AI, models like TxGemma will play a central role in making technology more inclusive, sustainable, and impactful.
