AI Boosted Technical Consulting

by
Daniel Mahlow
March 18, 2022

The field of artificial intelligence (AI) has advanced rapidly in recent years, especially in the area of large language models (LLMs). In this article, we present practical examples of how to utilize such a model to streamline technical consulting tasks and boost overall productivity.

Technical consulting is a professional service that provides advice and assistance to businesses and individuals in a particular area of expertise, such as sales, finance, marketing, or technology. A technical consultant is usually an expert in their field who provides advice on how to improve the operations of a company or helps to solve specific problems.

Managing client expectations is one of the main challenges of technical consulting. Clients may not always be clear about what they want or need, and it can be difficult to scope out a project when the parameters are unclear. Additionally, clients may not always be receptive to the consultant's recommendations, which can lead to frustration on both sides.

Similarly, scope creep can occur when an agreement's scope gradually expands beyond what was initially planned. This can happen when new problems are discovered or when the client requests additional services. Scope creep can be difficult to manage because it can increase the cost of the project and the amount of time required to complete it.

Finally, technical consultants need to be able to manage their time effectively in order to meet deadlines and stay within budget. This can be a challenge when working with multiple clients who have different needs.

In general, technical consulting can be a rewarding but challenging profession. Communication and project management skills, as well as a deep understanding of the client's business, are essential.

Considering that large AI models, especially LLMs, have progressed significantly over the last few years, let's try to use them to improve consulting processes.

What are LLMs?

Language models are a type of artificial intelligence that can read, understand, and generate text.

A large language model (LLM) is a machine learning model trained on a very large corpus of text. By learning the statistical relationships between words, the model can generate coherent, "real-looking" original text. LLMs can be used for a variety of tasks, including but not limited to automatic translation, summarization, and question answering.

Language models have been around for a long time, but only recently have they become truly sophisticated and powerful. The release of the GPT-3 model by OpenAI in 2020 was a major milestone in the development of LLMs.

Due to the development of new training techniques and the availability of more training data, language models have become more powerful in recent years. GPT-3 is arguably the most powerful language model to date, but there are a growing number of alternatives, such as BLOOM.

How do you interact with an LLM like GPT-3?

GPT-3 can be accessed via OpenAI's web interface or API. Interaction is entirely text-based: you paste or write text, and the AI is tasked with completing, extending, editing, or replacing the information it was given.

Since GPT-3 works zero-shot, you don't need to train it before you use it. There are times, however, when you may want to prime the system with some context first, such as keywords or texts you want it to consider.

If you are querying GPT-3 for a document summary or asking questions about primed knowledge, it is often useful to be as specific as you can. For example, rather than simply asking "What is the summary of this document?" or "What are the main points of this document?", you should ask "What is the summary of this document on the topic of [insert topic here]?". This helps GPT-3 to better understand your question and provide a more accurate response.
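
As a concrete illustration, here is a minimal sketch of both prompt styles sent through OpenAI's completion API (using the openai Python package as it existed at the time of writing). The engine name, the placeholder document, and the example topic are assumptions for illustration only:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

document = "(paste the client document here)"

# Vague prompt: the model has to guess what kind of summary you want.
vague_prompt = f"{document}\n\nWhat is the summary of this document?"

# Specific prompt: naming the topic narrows the completion considerably.
specific_prompt = (
    f"{document}\n\n"
    "What is the summary of this document on the topic of data warehouse migration?"
)

for prompt in (vague_prompt, specific_prompt):
    response = openai.Completion.create(
        engine="text-davinci-002",  # engine name is an assumption; any Davinci-class engine works
        prompt=prompt,
        max_tokens=200,
        temperature=0.3,            # keep the summary close to the source text
    )
    print(response["choices"][0]["text"].strip())
    print("---")
```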

Use-cases for LLMs in technical consulting

Summary and natural language querying

LLMs can be used to automatically generate summaries of documents, which can save the consultant time when reviewing a large number of documents. Additionally, LLMs can be used to answer natural language queries from clients. This can help the consultant to quickly find the information that the client is looking for.
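
A minimal sketch of what this could look like in practice, assuming the pre-1.0 openai Python package; the documents, the engine name, and the example question are placeholders:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")


def ask_documents(question, documents):
    """Prime the prompt with the documents, then ask a natural language question."""
    context = "\n\n---\n\n".join(documents)
    prompt = (
        f"{context}\n\n"
        f"Based only on the documents above, answer the following question:\n{question}"
    )
    # Note: the combined prompt must fit into the engine's context window.
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=300,
        temperature=0.2,  # low temperature keeps the answer close to the source material
    )
    return response["choices"][0]["text"].strip()


docs = ["(requirements document)", "(workshop notes)"]
print(ask_documents("Which systems need to be integrated in phase one?", docs))
```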

Internal onboarding

Getting a team member up to speed on a new or existing project can be time-consuming, as all requirements, developments, and client interactions have to be relayed to the new consultant. By priming an LLM with the project details, past meeting/workshop notes, and requirement documents, a consultant can quickly generate and ingest a project summary that is customized to their needs in terms of complexity and scope.

In addition, the AI can be used to monitor the progress of the project, and to identify any areas that need more discussion.

Ultimately, by utilizing large language models, you can streamline the process of onboarding your consultants, and ensure that everyone is on the same page from the start.
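
The sketch below illustrates one possible way to do this: priming a completion request with the project material and asking for a summary tailored to a role and level of detail. All names and parameter values are assumptions, not a prescribed implementation:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")


def onboarding_summary(project_notes, role, detail_level):
    """Generate a project summary tailored to the new consultant's role and desired depth."""
    prompt = (
        f"{project_notes}\n\n"
        f"Write an onboarding summary of this project for a {role}. "
        f"Keep the level of technical detail {detail_level}, and list the open action items at the end."
    )
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=400,
        temperature=0.4,
    )
    return response["choices"][0]["text"].strip()


notes = "(project details, past meeting/workshop notes, requirement documents)"
print(onboarding_summary(notes, role="data engineer joining the team", detail_level="high"))
```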

Provide questions for the client

After priming, the AI has an understanding of the project, its progress, and the problems that need to be solved. When consulting for clients, there is usually a need to continuously formulate and ask questions as they arise in order to clarify certain topics. An LLM can be useful for generating some of these questions.
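
For example, a primed prompt could end with an explicit request for questions, roughly like this (the context, engine name, and parameters are placeholders):

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

context = "(project summary, open issues, latest meeting notes)"

prompt = (
    f"{context}\n\n"
    "List five specific questions we should ask the client in the next call "
    "to clarify the open points above."
)

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=250,
    temperature=0.7,  # a slightly higher temperature encourages more varied questions
)
print(response["choices"][0]["text"].strip())
```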

Data structure exploration

Data exploration is an important part of any data consultancy project. It allows us to understand the data that the client has, identify any relationships or correlations that may exist, and determine how the data can be used to answer questions that the client may have. It also helps us to identify any additional data that may be needed in order to move the project forward and solve any challenges that the client may be facing.

If we prime the LLM correctly, it is possible to query for interesting observations or inconsistencies in data structures, and use that knowledge to request new data or clarify any questions that we may have.
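
As a sketch, a client's table definitions could be passed in directly and the model asked for observations; the schema below is invented purely for illustration:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Invented schema for illustration; in practice this comes from the client.
schema = """
CREATE TABLE customers (customer_id INT, name TEXT, created_at DATE);
CREATE TABLE orders (order_id INT, customer TEXT, order_date TEXT, amount FLOAT);
"""

prompt = (
    f"{schema}\n"
    "Point out inconsistencies or questionable design choices in these table definitions, "
    "and list any additional data we should request from the client."
)

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=300,
    temperature=0.3,
)
print(response["choices"][0]["text"].strip())
```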

Documentation

A consulting project requires a significant amount of documentation in order to be successful. This documentation includes, but is not limited to, client requirement documents, meeting notes, data pipelines, and process flows. An LLM can be used to help produce this documentation, and can adjust it to various personas as needed. This is important, as a technical contact will require different documentation than a non-technical client contact. By using an LLM, the documentation process can be streamlined and made more efficient.
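
One way this could look in practice is generating the same documentation for two different audiences from the same notes; the sketch below is an assumption-laden illustration, not a finished workflow:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

pipeline_notes = "(raw notes describing the data pipeline and its process flow)"

for audience in ("a data engineer", "a non-technical business stakeholder"):
    prompt = (
        f"{pipeline_notes}\n\n"
        f"Write documentation for this data pipeline aimed at {audience}."
    )
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=400,
        temperature=0.4,
    )
    print(f"--- Documentation for {audience} ---")
    print(response["choices"][0]["text"].strip())
```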

Caveats

Parameters

Modern LLMs expose a significant number of parameters to the end user. This includes everything from selecting the appropriate engine (in the case of GPT-3, the Davinci engine is the most powerful, but also the most expensive) to adjusting temperature, penalties, and a number of other settings.

A lot can be accomplished by just using the default parameters when interfacing with such a model, but tweaking them just right can really improve the output.
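
To illustrate the kind of tweaking meant here, the sketch below runs the same prompt with two parameter profiles: a conservative one for client-facing text and a looser one for brainstorming. The specific values are assumptions, not recommendations:

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = "(primed project context)\n\nDraft a status update for the client."

# Conservative profile for client-facing documentation: stay close to the input.
factual = {"temperature": 0.2, "frequency_penalty": 0.0, "presence_penalty": 0.0}

# Looser profile for brainstorming: more variation, repetition discouraged.
creative = {"temperature": 0.9, "frequency_penalty": 0.5, "presence_penalty": 0.5}

for name, params in (("factual", factual), ("creative", creative)):
    response = openai.Completion.create(
        engine="text-davinci-002",  # Davinci-class engine: most capable, most expensive
        prompt=prompt,
        max_tokens=300,
        **params,
    )
    print(f"--- {name} ---")
    print(response["choices"][0]["text"].strip())
```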

Although the scope of this article doesn't allow going into much detail here, know that optimizing the model parameters for specific use cases is crucial, especially when producing documentation or any other client-facing output.

Hallucination

One phenomenon that a large language model can exhibit is hallucination. This occurs when the model generates text that sounds plausible and confident but is not consistent with reality, such as invented facts, figures, or references. This can lead to inaccurate results and should be monitored closely.

Data privacy

When sharing data with an AI service, it's important to consider data privacy and security. AI services may collect and store data, including personal data, which could be used to identify individuals. This data may be used to improve the AI service's performance or for other purposes. It's important to be aware of how data is collected, stored, and used, and to ensure that appropriate safeguards are in place.

Replacing the human

Until we reach the ever-distant goal of artificial general intelligence, even a sophisticated LLM cannot replace a human consultant. The human aspect of consulting lies in the face-to-face contact, the building of relationships with clients, the fine nuances of interpersonal communication, and the successful management of high-level projects. However, LLMs can be a great tool to focus and support those conversations and accelerate the project.

Cost

OpenAI charges per token, with the rate depending on which AI engine you interface with. The most expensive engine, Davinci, costs about $0.06 (6 cents) per 1,000 tokens, which corresponds to roughly 750 generated words. The cost is mostly negligible when compared to the hourly rates of a consultant, but it's worth controlling it like you would with any other pay-per-use SaaS tool (e.g. for infrastructure).
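
A quick back-of-the-envelope calculation makes this concrete; the usage figures are invented, the price is the Davinci list price at the time of writing, and prompt tokens (which are billed as well) are ignored for simplicity:

```python
# Rough monthly cost estimate; check the figures against current OpenAI pricing.
PRICE_PER_1K_TOKENS = 0.06    # USD, Davinci-class engine list price (assumption)
TOKENS_PER_WORD = 1000 / 750  # ~1,000 tokens correspond to roughly 750 words

words_per_day = 5_000         # generated output across summaries, questions, documentation
working_days = 20

tokens = words_per_day * working_days * TOKENS_PER_WORD
monthly_cost = tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"Estimated monthly cost: ${monthly_cost:.2f}")  # about $8 under these assumptions
```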

LLMs and beyond

As artificial intelligence technology continues to develop at an astonishing rate, so too do the potential applications for large AI language models. One such application that has caught the attention of many is text-to-image generation, which has the potential to revolutionize the way we create and consume visual content.

With text-to-image generation, businesses will be able to create realistic images from textual descriptions with little to no human input. This could have a profound impact on the way businesses create and market products, as well as how we consume news and information.

[Image gallery omitted] Caption: This is not an image search engine result page; these images are created from scratch via diffusion in about 10-15 seconds, in a non-deterministic manner, so no image is ever generated twice. Querying a text-to-image model and refining the input has become an activity called "prompt engineering".

For businesses, text-to-image generation could be used to create realistic product images for websites and catalogs. This would allow businesses to save time and money on product photography, and could also lead to more consistent and accurate product representations across different platforms.

Text-to-image generation could also be used to create news images, which would provide a more immersive and realistic experience for news consumers. This could be particularly impactful for the way we consume breaking news, as images generated in real time would offer a more immediate and visceral understanding of the events unfolding.

In the future, text-to-image generation is likely to become increasingly realistic and widespread, with businesses and individuals alike finding new and innovative ways to utilize this powerful technology.

[Images omitted] Captions: "Product ideation is about to accelerate" and "DALL·E is dyslexic, but a proficient artist".

Summary

In recent years, artificial intelligence technology has progressed rapidly, particularly in the area of large language models (LLMs). These models can be used to streamline some of the workflows in technical consulting, such as identifying relevant documents or analyzing text data. Additionally, AI can be used to automate repetitive tasks, such as data entry or report generation. This can free up the consultant's time to focus on more strategic tasks.

LLMs can be used for a variety of tasks in technical consulting, such as summarizing documents or answering natural language queries. Additionally, LLMs can be used to help generate questions for clients, or to explore data structures. Finally, LLMs can be used to generate documentation for a consulting project.

While large language models have many potential applications, there are also some pitfalls to be aware of, such as data privacy and security concerns, or the potential for inaccurate results.

LLMs are also likely to become more powerful as new training techniques and more training data become available. This will enable the models to generate text that is even more realistic and fluent.

As LLMs become more widespread and more powerful, they are likely to have a profound impact on businesses and individuals alike.

💡 Get in touch with us at info@contiamo.com if you are interested in learning more about how LLMs and other artificial intelligence technologies can benefit you or your business.

About this article

This article was produced with ideation and drafting via GPT-3, editing via Wordtune, and of course, manual human review and editing.
