Mobilewalla's Smart AI on Google Cloud

Success Stories

Industry: Telecom, Media & Entertainment

About the Client:

Mobilewalla, a leader in analyzing US internet and phone user data, needed an advanced AI tool on Google Cloud. Their goal was to simplify complex data interpretation and provide actionable insights for business decisions. This required a secure, scalable system leveraging services such as Vertex AI and BigQuery to deliver clear intelligence from vast datasets.

The Challenge:

Mobilewalla possessed vast, intricate broadband data that was challenging to interpret effectively. They required a streamlined method to convert this data into valuable information, such as identifying internet service provider market share. The system needed to be intuitive, manage large datasets securely using Google Cloud's robust security features, and integrate seamlessly with their existing Google Cloud environment. Ensuring data accuracy, privacy, and accessibility without system overhauls was paramount.

The Solution:

Evonence constructed an AI-driven solution utilizing Google Cloud's Vertex AI for custom model training and BigQuery for analytics. Broadband data was ingested from Google Cloud Storage, processed, and securely stored in BigQuery. A web application, hosted on a Compute Engine virtual machine, enabled users to make natural-language queries, which were interpreted by AI models on Vertex AI to provide clear, concise answers. The system was secured with Identity and Access Management (IAM).

Leveraging Google’s Product Suite:

Vertex AI, BigQuery, Cloud Run, Compute Engine, Google Cloud Storage, and Identity and Access Management (IAM).

Cloud Scale and Speed:

The solution is architected for scalability and speed by leveraging core Google Cloud services capable of handling massive datasets without performance degradation. Google BigQuery serves as the backbone for vector storage and querying, chosen for its scalability and its ability to run large-scale analytics, including vector similarity searches. All of Mobilewalla's broadband data is stored in Google Cloud Storage (GCS) and processed in BigQuery. The foundational models were selected on criteria that included efficiency, latency, and computational resource usage to ensure a responsive user experience in real-time applications. Built on trusted Google Cloud services, the design remains scalable for future data growth and use cases.
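The vector similarity search that BigQuery performs at scale can be illustrated with a small, self-contained sketch. Everything below is invented for illustration: the embeddings are tiny hand-made vectors, whereas in the production system they come from the embedding model and live in BigQuery tables.

```python
# Toy illustration of vector similarity search: rank stored rows by cosine
# similarity to an embedded query and keep the top k. BigQuery performs the
# same operation over billions of rows; the vectors here are stand-ins.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, rows, k=2):
    """Return the k rows whose embedding is most similar to query_vec."""
    scored = sorted(rows, key=lambda r: cosine_similarity(query_vec, r["embedding"]),
                    reverse=True)
    return scored[:k]

# Hypothetical knowledge-base rows with made-up 3-dimensional embeddings.
rows = [
    {"text": "Comcast market share in Mount Laurel", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Households switching providers in Austin", "embedding": [0.1, 0.9, 0.1]},
    {"text": "Average broadband speed by county", "embedding": [0.0, 0.2, 0.9]},
]

query = [0.85, 0.15, 0.05]  # stand-in for an embedded question about market share
print(top_k(query, rows, k=1)[0]["text"])
```

The ranking step is the only conceptual work; the engineering value of BigQuery is doing it over massive datasets without the application managing any of the index or compute.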

Google AI-Enhanced Predictions:

The system uses Google AI, specifically Vertex AI, to deliver AI-enhanced predictions and insights from complex broadband data. The solution employs a Retrieval-Augmented Generation (RAG) technique, combining Google's powerful Gemini 2.0 Flash model with the Gecko 003 vector embedding model. When a user asks a question in plain English, the Gecko model converts it into a vector embedding to search for the most relevant data in the BigQuery knowledge base. The retrieved data is then passed as context to the Gemini model, which synthesizes the information to generate a clear, human-readable answer. This process turns raw numerical data into actionable insights, allowing users to understand complex trends like market share and customer churn without needing technical skills. The entire process is managed on Vertex AI, which serves the foundational models and supports prompt tuning to refine model performance.
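The retrieval-then-generation flow described above can be sketched in a few lines. This is a local illustration only: `embed()` is a hypothetical stand-in for the Gecko embedding model (a toy bag-of-words count), and the assembled prompt stands in for the request that would be sent to Gemini 2.0 Flash on Vertex AI.

```python
# Minimal RAG sketch: embed the question, retrieve the closest knowledge-base
# entry, and assemble a grounded prompt for the generative model.
def embed(text):
    # Hypothetical stand-in for the Gecko model: a tiny bag-of-words vector.
    vocab = ["comcast", "market", "share", "churn", "austin", "mount", "laurel"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def retrieve(question, knowledge_base, k=1):
    q = embed(question)
    def overlap(doc):
        d = embed(doc)
        return sum(x * y for x, y in zip(q, d))
    return sorted(knowledge_base, key=overlap, reverse=True)[:k]

def build_prompt(question, context_rows):
    context = "\n".join(context_rows)
    return (
        "Answer using only the data below.\n"
        f"Data:\n{context}\n"
        f"Question: {question}\n"
    )

# Invented knowledge-base rows for illustration.
kb = [
    "Comcast holds 98.87% of internet users in Mount Laurel.",
    "1,200 households in Austin switched broadband providers last month.",
]
question = "What market share does Comcast have in Mount Laurel?"
prompt = build_prompt(question, retrieve(question, kb))
# In the deployed system, this prompt would be sent to Gemini via Vertex AI.
print(prompt)
```

The grounding instruction ("Answer using only the data below") is what keeps the generated answer tied to retrieved facts rather than the model's general knowledge.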

Partner Role in the Project:

Evonence leveraged Google Cloud's Vertex AI and BigQuery to help Mobilewalla build an advanced AI system. They transformed complex broadband data into actionable insights, developed a user-friendly web application deployed on Cloud Run, and established a scalable, secure data infrastructure using Google Cloud Storage.

The Results:

The system effectively translated complex data into clear insights, such as "Comcast holds 98.87% of internet users in Mount Laurel." The Vertex AI-powered web app on Cloud Run simplified querying, enabling rapid, data-driven decisions. The solution significantly saved time, ensured data security through Google Cloud's infrastructure, and offered scalability for future data growth. Mobilewalla now possesses a reliable tool for enhanced data comprehension.

High Availability:

The application is deployed on a Google Compute Engine Virtual Machine, providing a dedicated and controlled environment for the user-facing Streamlit web app. The core AI functionalities are delivered through Vertex AI, which serves the Gemini and Gecko models via managed, low-latency endpoints. This architecture leverages Google's managed AI platform, which is designed for security, scalability, and reliability. Using established Google Cloud services like Compute Engine, Vertex AI, and BigQuery ensures the entire solution is scalable and secure, forming a robust infrastructure for continuous operation. The system is designed to integrate with Vertex AI MLOps tools for automated retraining and versioning, further enhancing its reliability and maintainability.

AI-Based Predictions:

The AI-powered system provides specific, data-driven predictions and answers to user queries about broadband trends. It can interpret complex questions and deliver concise insights by querying a vast dataset of U.S. broadband usage. Users can ask questions in natural language and receive clear, accurate answers.

Examples of the system's predictive and analytical capabilities include:

  • Answering questions about market share, such as, “What’s the current market share of Comcast in the 30301 zip code?”.

  • Providing data on customer churn and provider switching, for instance, “How many households in Austin switched broadband providers last month?”.

  • Aggregating data across various geographic levels like cities, counties, or zip codes.

  • Generating precise, context-aware responses, as demonstrated by the sample output: "The available data does not include information on Comcast's market share in July 2024 if it had served 2.5 million households".
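As a rough illustration of the geographic roll-ups in the bullets above, the sketch below aggregates household counts from zip code up to city and computes a provider's market share. All providers, places, and numbers here are invented for illustration.

```python
# Hypothetical geographic aggregation: compute a provider's market share at
# any level (zip code, city, county) from per-record household counts.
from collections import defaultdict

records = [
    {"zip": "30301", "city": "Atlanta", "provider": "Comcast", "households": 800},
    {"zip": "30301", "city": "Atlanta", "provider": "AT&T",    "households": 200},
    {"zip": "30302", "city": "Atlanta", "provider": "Comcast", "households": 500},
    {"zip": "30302", "city": "Atlanta", "provider": "AT&T",    "households": 500},
]

def market_share(records, level, provider):
    """Percentage of households served by `provider`, grouped by `level`."""
    totals = defaultdict(int)
    provider_counts = defaultdict(int)
    for r in records:
        key = r[level]
        totals[key] += r["households"]
        if r["provider"] == provider:
            provider_counts[key] += r["households"]
    return {k: round(100 * provider_counts[k] / totals[k], 2) for k in totals}

print(market_share(records, "zip", "Comcast"))   # per-zip-code shares
print(market_share(records, "city", "Comcast"))  # rolled up to city level
```

In the real system, this kind of aggregation runs as BigQuery analytics over the full US broadband dataset; the point of the sketch is only the grouping logic.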

 

Customer Testimonial

It was a pleasure working with Evonence on a Google Cloud project for Mobilewalla that involved building a custom LLM solution tailored to our specific requirements. The team was well organized, explained everything clearly, and was always helpful throughout the engagement. They also assisted us in comparing Apache Spark computation costs between AWS and GCP, which gave us valuable insight for decision-making. Overall, it was a smooth and productive collaboration.

 

Related Success Stories

Previous

Popstock: AI Financial Literacy Through Pop Culture

Next

Interflexion: Scaling AI Predictions with Google Cloud MLOps