Onboarding the AI Workforce: Common Concerns

Dilip Mohapatra

In 2018, a survey conducted by Harvard Business Review explored business leaders' views on AI. At the time, around 75% believed AI would revolutionize their businesses within three years. Today, that percentage has likely reached close to 100%. Despite this, many companies still struggle to implement AI meaningfully, often limiting it to basic functionalities tacked onto their existing systems.

Through my conversations with business leaders who are onboarding their first AI agents, I often hear about the unique challenges they face when scaling and integrating AI solutions. Many of the concerns raised are valid and require careful consideration.

This article highlights some of the most common concerns and how businesses can address them while leveraging the power of AI in their operations.

“We are worried about models being trained on our data”

Businesses, like individuals, have a right to privacy. Their intellectual property is an essential asset that must be protected. For society as a whole, safeguarding intellectual property encourages innovation, driving economic growth.

Recently, there have been allegations that certain AI models, like ChatGPT, were trained on vast amounts of intellectual property without consent. Researchers from Indiana University were even able to retrieve sensitive information, such as contact details of New York Times employees, from these models.

The tech industry's track record regarding privacy and legal considerations is mixed, often taking the approach of “moving fast and breaking things.” For any business, this raises concerns not only about protecting intellectual property but also about fulfilling legal obligations to employees, investors, and customers.

In reality, AI agents need access to sensitive data to produce meaningful outputs. The key to navigating this concern is ensuring that the data AI systems access is never used to train models. Much of the confusion stems from users entering sensitive information into consumer services like ChatGPT, whose terms of service permit input data to be used for training by default. Cognitiveview works differently.

At Cognitiveview, we allow businesses to create AI agents using various large language models (LLMs), offering several protective options:

  1. We host the model in a private cloud, maintaining complete control so that no training ever occurs on client data (see the sketch after this list).
  2. We establish clear data processing agreements (DPAs) with LLM vendors, ensuring they don’t train their models on the data we provide.
  3. We use paid APIs whose terms prohibit the vendor from training on the data shared through them.
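
For option 1, the integration pattern is simple in practice: point an OpenAI-compatible client at a privately hosted endpoint, so prompts never leave infrastructure you control. Below is a minimal sketch; the endpoint URL, key, and model name are hypothetical placeholders, not Cognitiveview’s actual configuration.

```python
# Minimal sketch: calling a privately hosted, OpenAI-compatible endpoint
# so prompts stay inside your own cloud. The URL, key, and model name
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # private deployment, not a public vendor API
    api_key="internal-gateway-key",                  # credential for the private gateway
)

response = client.chat.completions.create(
    model="internal-llm",  # whatever model the private deployment serves
    messages=[
        {"role": "system", "content": "You are a contracts assistant."},
        {"role": "user", "content": "Summarize the indemnity clause in the attached agreement."},
    ],
)
print(response.choices[0].message.content)
```

Because the model runs inside the private cloud, nothing in this exchange ever reaches a third-party training pipeline.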

“What if my AI agent generates false information or behaves unpredictably?”

Concerns over AI hallucinations, where a model generates inaccurate or fabricated responses, are well-founded. Air Canada, for instance, recently lost a tribunal case brought by a passenger whom its chatbot had misinformed about the airline’s bereavement policy. However unwise it may seem for an airline to contest such a claim, the ruling underscores the real-world impact of AI errors.

The unpredictable behavior of LLMs is often attributed to their “black box” nature. Hallucinations generally occur due to factors such as:

  • The probabilistic nature of LLMs, whose outputs follow statistical patterns in language rather than verified facts.
  • Incomplete or conflicting training data.
  • Outputs that drift from the source material supplied in the prompt instead of staying grounded in it.

At Cognitiveview, mitigating these risks is a top priority. One of our core features is Knowledge, which lets businesses store and manage their own datasets; the most relevant entries are retrieved and injected into prompts, so AI agents ground their outputs in reliable, business-specific information.
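
This grounding follows the familiar retrieval-augmented pattern: fetch the most relevant stored entry, then instruct the model to answer only from it. The sketch below is a self-contained illustration, not Cognitiveview’s implementation; the knowledge entries are invented, and the word-count similarity is a stand-in for real embeddings.

```python
# Retrieval-augmented prompting sketch: score stored knowledge entries
# against the question, then ground the prompt in the top match.
# Entries and scoring are illustrative only.
from collections import Counter
import math

KNOWLEDGE = [
    "Refunds are issued within 14 days of a returned item passing inspection.",
    "Bereavement fares must be requested before travel; retroactive claims are not honored.",
    "Enterprise plans include a 99.9% uptime commitment with service credits.",
]

def score(question: str, entry: str) -> float:
    """Cosine similarity over raw word counts -- a stand-in for real embeddings."""
    q, e = Counter(question.lower().split()), Counter(entry.lower().split())
    dot = sum(q[w] * e[w] for w in q)
    norms = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in e.values()))
    return dot / norms if norms else 0.0

def build_prompt(question: str) -> str:
    best = max(KNOWLEDGE, key=lambda entry: score(question, entry))
    # Constrain the model to the retrieved context so answers stay grounded.
    return (
        "Answer using ONLY the context below. If the context does not "
        "cover the question, say you don't know and escalate.\n\n"
        f"Context: {best}\n\nQuestion: {question}"
    )

# Prints a prompt grounded in the bereavement-policy entry.
print(build_prompt("Can I request a bereavement fare before travel?"))
```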

Effective prompting is crucial, though creating good prompts requires a deep understanding of how LLMs function, including their strengths and weaknesses. Given the emerging nature of AI technology, prompt engineering remains a critical but underdeveloped skill.

To simplify the process, we’ve developed a workflow builder that allows users to craft prompts using a visual interface. Through extensive experimentation, we’ve fine-tuned this system into a structure that LLMs can follow with high precision. This approach enables subject matter experts to effectively guide AI agents.
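
To make the idea concrete, here is a rough sketch of how a visual workflow might compile into one of those precisely structured prompts. The step names and fields are hypothetical illustrations, not our actual schema.

```python
# Hypothetical sketch: compiling a visual workflow's steps into a
# structured prompt. Step names and fields are illustrative only.
WORKFLOW = {
    "role": "You are a support agent for Acme Co.",
    "steps": [
        "Classify the request as billing, technical, or other.",
        "Draft a reply using only the provided knowledge context.",
        "If confidence is low or the topic is sensitive, output ESCALATE.",
    ],
    "output_format": "A JSON object with keys: category, reply, escalate.",
}

def compile_prompt(workflow: dict) -> str:
    # Render the workflow as numbered instructions the model follows in order.
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(workflow["steps"], 1))
    return (
        f"{workflow['role']}\n\n"
        f"Follow these steps in order:\n{steps}\n\n"
        f"Output format: {workflow['output_format']}"
    )

print(compile_prompt(WORKFLOW))
```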

Looking ahead, we foresee experts collaborating with AI specialists to create libraries of AI agent templates. These templates will autonomously perform specific tasks and roles, with the flexibility to adapt to each company’s unique needs. The success of our BDR (Business Development Representative) agent is just the beginning; more roles will soon follow.

“Customers, employees, and partners might resist AI adoption”

Human interaction remains essential, and earlier generations of chatbots and rules-based automation often provided a poor experience. This was due both to the limitations of the technology and to the rigid way it was implemented.

Today’s AI agents are different. Powered by LLMs, they can analyze situations dynamically and make decisions based on context, rather than following a rigid set of rules. This allows them to complete a broader range of tasks independently, while also producing higher-quality outputs.

When we discuss AI integration with clients, we emphasize augmenting their current workforce rather than replacing it. Some tasks will always be better handled by people. Air Canada’s recent misstep, for instance, could have been avoided if the chatbot had recognized the sensitivity of the situation and escalated it to a human agent.

Escalation is a key feature built into Cognitiveview’s AI agents. Our users can establish parameters for when AI agents should hand off a task to a human colleague, whether for approval, additional input, or complete takeover. Whether it’s managing customer interactions or resolving uncertainties in tax code entries, AI agents can exercise discretion to escalate issues when appropriate.
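
As a simple illustration of such handoff parameters (the topics and threshold below are hypothetical placeholders, not product defaults), an agent can check both the sensitivity of the topic and its own confidence before deciding whether to answer or escalate:

```python
# Hypothetical escalation policy sketch: the topic list and confidence
# threshold are illustrative placeholders, not product defaults.
SENSITIVE_TOPICS = {"bereavement", "refund dispute", "legal threat"}
CONFIDENCE_FLOOR = 0.75

def should_escalate(topic: str, confidence: float) -> bool:
    """Hand off to a human on sensitive topics or low-confidence answers."""
    return topic in SENSITIVE_TOPICS or confidence < CONFIDENCE_FLOOR

def handle(topic: str, confidence: float, draft_reply: str) -> str:
    if should_escalate(topic, confidence):
        return f"[escalated to human] topic={topic}, confidence={confidence:.2f}"
    return draft_reply

print(handle("bereavement", 0.92, "Here is our policy..."))   # escalates: sensitive topic
print(handle("baggage", 0.60, "Your bag arrives tomorrow."))  # escalates: low confidence
print(handle("baggage", 0.90, "Your bag arrives tomorrow."))  # answered by the agent
```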

“AI is just hype—I’ve tried it, and it’s not that impressive”

It’s not uncommon for revolutionary technologies to be labeled as “toys” in their early stages. The personal computer was once seen as a hobbyist gadget with limited practical use, and the same was true for the internet, the car, and even mobile phones.

While today’s AI has room for improvement, we’re already witnessing its transformative impact on businesses across various sectors. AI may still be in its infancy, but its potential is vast, and it’s only getting better.

As businesses integrate AI agents into their processes, they encounter obstacles ranging from data privacy to customer acceptance. However, each challenge presents an opportunity for innovation and growth. The one thing we cannot ignore is that AI is reshaping the business landscape in ways that will challenge long-established norms.

At Cognitiveview, we believe that by 2030, every business will have an AI agent working alongside their human teams. These agents will handle increasingly meaningful tasks, allowing businesses to scale without needing to expand their workforce. Explore what we’ve built so far and see the future of AI in action.