Building an AI Chatbot and LLM to Fit Aptive’s Corporate Needs

MEET ARCHIE

How we built a large language model tailored to our corporate needs

 

PART 2

In this follow-up article, Aptive continues its journey to create its own AI chatbot and large language model (LLM). Read part one here.

Written by Ramesh Kagoo and Anne Wright

From Concept to Reality: The Development Journey and Beyond

Aptive’s innovation team adopted a multi-disciplinary approach to building an AI chatbot from scratch, bringing together expertise in natural language processing (NLP), machine learning, software development and user experience design. Our process was iterative and guided by a clear roadmap, with each of the following four stages designed to address specific challenges and optimize performance:

Stage 1: Defining the Core Objectives
We began by defining the primary objectives of our AI chatbot. Our goal was to create a highly responsive, contextually aware conversational agent capable of handling complex interactions across multiple domains. To achieve this goal, we needed a deep understanding of user intent and natural language, as well as the ability to generate coherent and contextually relevant responses.

Stage 2: Designing the Architecture
We designed the chatbot architecture to ensure modularity, scalability and flexibility. For example, we designed a pipeline to integrate various NLP techniques, including intent classification, named entity recognition and sentiment analysis. By incorporating these elements, we ensured our chatbot would not only understand user input but also be able to respond with high accuracy and relevance.
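As an illustration of how such a pipeline fits together, the sketch below chains intent classification, entity extraction and sentiment scoring. The keyword rules, regex-based entity matcher and word lists are hypothetical stand-ins for trained models, not Archie's actual components:

```python
import re

# Hypothetical keyword rules standing in for trained classifiers.
INTENT_KEYWORDS = {
    "password_reset": ["password", "reset", "locked out"],
    "benefits": ["benefits", "insurance", "401k"],
}
POSITIVE = {"great", "thanks", "helpful"}
NEGATIVE = {"frustrated", "broken", "unhappy"}

def classify_intent(text):
    text = text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

def extract_entities(text):
    # Naive named-entity recognition: capitalized words that are not
    # sentence-initial. A production system would use a trained NER model.
    return re.findall(r"(?<!^)(?<!\. )\b[A-Z][a-z]+\b", text)

def score_sentiment(text):
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def analyze(text):
    """Run the three pipeline stages and return one structured result."""
    return {
        "intent": classify_intent(text),
        "entities": extract_entities(text),
        "sentiment": score_sentiment(text),
    }
```

In a real deployment each stage would be a model call rather than a rule, but the contract stays the same: raw text in, structured signals out, which downstream dialogue management can act on.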

To enhance performance, we also integrated a reinforcement learning framework, allowing the chatbot to improve over time through continuous feedback and data-driven refinement. This capability enabled us to create a chatbot that evolves with user interactions, delivering increasingly sophisticated responses comparable to those of a retrieval-augmented generation (RAG) model.
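One simple way to picture the continuous-feedback idea is a bandit-style loop that re-weights candidate responses based on user ratings. This is an illustrative sketch of the concept, not the production training code:

```python
import random
from collections import defaultdict

class FeedbackLoop:
    """Epsilon-greedy selection over candidate responses,
    updated from thumbs-up / thumbs-down style feedback."""

    def __init__(self, candidates, epsilon=0.1, seed=None):
        self.candidates = list(candidates)
        self.epsilon = epsilon
        self.scores = defaultdict(float)  # running mean reward per candidate
        self.counts = defaultdict(int)
        self.rng = random.Random(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.candidates)  # explore occasionally
        # Otherwise exploit the best-rated candidate so far.
        return max(self.candidates, key=lambda c: self.scores[c])

    def record(self, candidate, reward):
        # Incremental mean update from user feedback (e.g. +1 / -1).
        self.counts[candidate] += 1
        n = self.counts[candidate]
        self.scores[candidate] += (reward - self.scores[candidate]) / n
```

The same shape scales up: replace the candidate list with model outputs and the reward with aggregated user signals, and the system steadily favors responses users rate well.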

Stage 3: Customizing the Conversational Flow
Unlike many off-the-shelf chatbot solutions, our approach prioritized deep customization of the conversational flow. We tailored the dialogue management system to handle specific use cases, from customer service to technical support and beyond. This way, our chatbot could deliver personalized and contextually appropriate experiences across different domains.

Stage 4: Ensuring Data Privacy and Security
Data privacy and security were critical concerns throughout the development process. By building our chatbot in-house, we maintained full control over the data, ensuring the highest security standards applied to all sensitive information. This approach also allowed us to customize data retention policies, ensuring compliance with industry regulations and client-specific requirements.

Transitioning to a Proprietary LLM

While the development of our AI chatbot was a significant milestone, we recognized that to fully unleash the potential of GenAI, we needed to build our own proprietary LLM. This next logical step was driven by the need for even greater control, flexibility and innovation in leveraging AI. Our model incorporated the following four features:

1. Custom Model Training
Developing our own LLM required extensive data curation and model training. We began by sourcing high-quality, domain-specific datasets to allow our model to excel in areas most relevant to our clients. Leveraging advanced techniques like transfer learning and fine-tuning, we trained our LLM to not only generate coherent text but also to align with the specific language and tone required by our industry use cases.
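The data-curation step described above can be sketched as filtering raw documents into instruction-style training records. The filters, domain vocabulary and JSONL schema below are assumptions chosen for illustration, not our actual pipeline:

```python
import json

def curate(docs, domain_terms, min_len=40):
    """Keep documents that mention domain vocabulary and are long
    enough to be useful, then emit fine-tuning records as JSONL lines."""
    records = []
    for doc in docs:
        text = doc["text"].strip()
        if len(text) < min_len:
            continue  # too short to teach the model anything
        if not any(t in text.lower() for t in domain_terms):
            continue  # off-domain; would dilute the fine-tuning signal
        records.append(json.dumps({
            "prompt": f"Summarize for a proposal writer: {doc['title']}",
            "completion": text,
        }))
    return records
```

Records in this shape feed directly into standard fine-tuning tooling; the quality gates up front matter more than the volume of data that passes them.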

2. Scalability and Performance Optimization
Building an LLM from scratch also meant addressing challenges related to scalability and performance. We implemented a distributed processing system to handle large document collections while minimizing latency. By fine-tuning the balance between computational efficiency and model complexity, we achieved high performance across a range of tasks, from generating content to answering complex queries.
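A minimal sketch of the throughput idea: fan documents out to a worker pool so that slow per-document calls overlap instead of running back to back. The stub function and worker count are illustrative; a real system would distribute across machines, not just threads:

```python
from concurrent.futures import ThreadPoolExecutor

def embed_stub(doc):
    # Stand-in for an expensive per-document model call
    # (e.g. embedding or summarizing a document).
    return {"id": doc["id"], "tokens": len(doc["text"].split())}

def process_corpus(docs, max_workers=8):
    """Process documents concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(embed_stub, docs))
```

The win comes when `embed_stub` is I/O-bound (remote model calls): wall-clock time approaches the slowest batch rather than the sum of all calls.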

3. Customization and Flexibility
One of the key advantages of our proprietary LLM is the ability to customize the model to meet specific client and organizational needs. Unlike generic models, which may produce results requiring significant post-processing, we designed our LLM to generate highly relevant, context-aware content with minimal need for additional refinement. This lets us create bespoke solutions that align perfectly with clients’ unique requirements.

4. Ethical AI and Bias Mitigation
Throughout the development of our LLM, we strongly emphasized ethical AI practices and bias mitigation. By actively monitoring and addressing potential biases in training data, we ensured our model produced fair, inclusive and accurate outputs. This commitment to ethical AI was central to our development philosophy and is a key differentiator for our LLM.
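One lightweight form of the bias monitoring described above is comparing how often demographic term groups appear in training data and flagging skew past a threshold. The term lists and threshold here are illustrative placeholders for a fuller audit:

```python
from collections import Counter

# Illustrative term-to-group mapping; a real audit would cover
# many more dimensions than gendered pronouns.
GENDER_TERMS = {
    "he": "m", "him": "m", "his": "m",
    "she": "f", "her": "f", "hers": "f",
}

def group_counts(texts, mapping):
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            if word in mapping:
                counts[mapping[word]] += 1
    return counts

def skew_ratio(counts):
    """Ratio of most- to least-frequent group; 1.0 means balanced."""
    if len(counts) < 2:
        return float("inf")  # a group is entirely missing
    values = sorted(counts.values())
    return values[-1] / values[0]

def flag_bias(texts, threshold=2.0):
    """True if the corpus over-represents one group past the threshold."""
    return skew_ratio(group_counts(texts, GENDER_TERMS)) > threshold
```

Checks like this run cheaply over every training batch, so imbalance is caught before it reaches the model rather than after it shows up in outputs.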

Building our own AI chatbot and proprietary LLM has empowered Aptive to push the boundaries of what’s possible with GenAI. Developing these solutions in-house has given us full control over the customization, scalability and security of our AI-driven products.

The PeopleOps Chatbot and Beyond

We developed our first AI chatbot for Aptive’s PeopleOps department to streamline internal processes by handling employee queries, managing documentation and assisting with routine tasks. Its successful deployment on our intranet significantly reduced manual work and response times, quickly becoming a vital part of operations.

This success spurred interest from other departments, leading to two more specialized chatbots:

  • Past Performance Library Chatbot (Bids and Proposals Team): To address the time-consuming task of manually searching through past project data, we built a chatbot that quickly queries our past performance library, enabling team members to find relevant project details faster, thereby improving the proposal-writing process.
  • VHA Contract Chatbot: As part of our contract with the Veterans Health Administration (VHA) to manage content on USAJOBS.gov, we developed a chatbot to enhance search functionality and automate tasks, improving content workflow management.

With these deployments, our proprietary LLM continues to evolve, enabling more advanced, context-aware chatbots that can handle increasingly complex tasks. This progress positions Aptive as a GenAI leader and drives innovation across departments.

The integration of GenAI and LLMs marks a significant milestone in Aptive’s journey toward digital transformation. By leveraging these advanced technologies, we are poised to deliver unparalleled value to customers while streamlining company operations. Our enhanced capabilities will particularly benefit our federal clients as they seek to transform their business operations, ensuring that we meet their unique needs with the highest standards of efficiency, compliance and security.
