In projects involving the development of new products or offers, especially in the B2B segment, a common challenge arises: how can we be sure that what we offer is truly relevant to users? Recently, ICT Hub’s consulting team worked on one such task—validating a joint offer from two companies in different industries, aiming to differentiate themselves and attract new clients. What set this project apart was the integration of AI tools at every phase—from research to analysis—primarily through the use of custom GPT models. In this text, we share how this worked in practice and the lessons we learned from the process.
What Was Our Task?
The joint offer we validated was the result of a strategic collaboration between two companies from different industries, targeting the same B2B segment. The idea was to jointly present a product that would deliver added value to customers—something they couldn’t get from the individual offerings alone. While the joint offer showed clear business potential, several important aspects needed verification before going to market. Our consulting team’s task was to support the validation process—not only to confirm that the offer made sense, but also to clearly map what adjustments were necessary to best meet the needs of the target audience.
Specifically, our task included:
- Understanding the needs of the specific B2B segment targeted by the offer
- Testing how relevant the joint offer is, and at which stages of the client lifecycle it is relevant
- Defining the value proposition for this offer
- Providing recommendations for adapting the package and communication
- Offering guidelines for the pilot and go-to-market approach
Approach and Role of AI Tools in the Process
In carrying out this task, we went through all the key phases of validation—from defining assumptions to concrete recommendations for further development of the offer.
Our approach included the following steps:
- Analysis of competitors and best market practices
- Defining the target group and mapping specific needs and behaviours
- Precise formulation of assumptions to be tested
- Designing the research process—a combination of surveys and interviews
- Collecting and structuring data
- Analysing and extracting key insights
- Providing concrete recommendations for offer design, pilot phase, and market launch
Throughout the entire process, we used AI—specifically ChatGPT—as support in various steps, but with clearly defined limits to its role. The greatest benefit was in preparation and analysis, where it proved to be a useful tool for speeding up the process, providing additional verification, and refining insights.
How We Configured Custom GPT Models and Ensured Data Security
From the very beginning, it was important to us that AI be integrated in a meaningful and controlled manner. We recognised that ChatGPT could be helpful, but only if it was used to support a clearly defined process and with full oversight of its actions and the data it handled.
We started with the basics—data security. Although we did not use data classified as sensitive, we worked with materials that were part of our clients’ internal analyses, research, and plans. Therefore, we immediately excluded the possibility of our data being used for model training. We submitted a request to opt out of the collection and processing of conversations (the so-called 'do not use conversational data to improve models' option), ensuring that everything we did remained within a protected environment.
Then we developed two custom GPT models with a clear division of roles. One GPT was used in the research preparation phase: it helped define assumptions, formulate questions, create interviews and surveys, and verify whether we covered all relevant aspects related to the target group and business context. The other GPT was dedicated to data analysis. Its role was to process research documentation and assist in identifying patterns, synthesising insights, and mapping recommendations—strictly within very precisely defined rules. This approach allowed us to separate the logic of each model’s work and avoid mixing contexts, which ensured greater accuracy at every step. For each model, we wrote a detailed set of instructions and limitations. For example, the configuration of the analysis model included:
- Using exclusively data from the documents provided (interviews, surveys, methodological documents)
- Avoiding adding or assuming anything not explicitly contained in those documents
- Maintaining a professional tone, without generating generic or overly polished responses
- Always using specific quotes from interviews as the basis for each conclusion
- Avoiding summarising or simplifying; instead, distinguishing nuances in the responses
- Using language and terminology from the documents to maintain consistency
We also provided detailed guidance on identifying insights—for example, clarifying that an insight is not just an individual’s opinion, but a pattern recurring across multiple respondents that can be connected to business implications. We included examples of true insights as well as counter-examples to illustrate what does not qualify as an insight. In practice, this configuration demanded precision but also required certain compromises, so we prioritised what was most important for the model to know upfront and planned to add additional context during the interaction.
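For readers who want to picture what such a configuration looks like, below is a minimal sketch of how an instruction set along these lines could be expressed if the analysis assistant were reproduced through the OpenAI API instead of the ChatGPT custom GPT builder we used. The rule wording, the helper name, and the model choice are illustrative assumptions, not the exact configuration from the project.

```python
# A minimal sketch, assuming the analysis assistant were rebuilt with the
# OpenAI Python SDK rather than the ChatGPT custom GPT builder.
# Rule wording, helper name, and model choice are illustrative assumptions.
from openai import OpenAI

ANALYSIS_INSTRUCTIONS = """\
You are a research analysis assistant.
Rules:
- Use only the documents provided (interviews, surveys, methodology notes).
- Do not add or assume anything that is not explicitly stated in them.
- Keep a professional tone; avoid generic or overly polished phrasing.
- Ground every conclusion in specific, verbatim interview quotes.
- Do not summarise away nuance; note where respondents differ.
- Reuse the language and terminology found in the documents.
An insight is a pattern that recurs across multiple respondents and carries a
business implication; a single respondent's opinion is not an insight.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def analyse(documents: str, task: str) -> str:
    """Send an analysis task to the assistant, grounded in the supplied documents."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": ANALYSIS_INSTRUCTIONS},
            {"role": "user", "content": f"Documents:\n{documents}\n\nTask: {task}"},
        ],
    )
    return response.choices[0].message.content
```

The point of the sketch is the separation of concerns: the rules and the definition of an insight live in the system instructions, while each analysis task only supplies documents and a question.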
How Did We Structure the Data and Use the Models in the Analysis?
For AI to play a meaningful role in the analysis, the key was how we prepared the data. That’s why we carefully considered the structure and format of everything we would input into the models.
The data came from several primary sources—company databases based on information from the Business Registers Agency (APR), surveys, and interviews. The survey data was already partially structured, but we further refined it to help the model navigate the responses more easily and identify patterns. During the interviews, notes were taken using the same framework, with consistent blocks of questions, to ensure comparability across respondents. This approach allowed us to extract high-quality insights and gave the GPT model a consistent environment in which to conduct its analysis.
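To illustrate what "the same framework, with consistent blocks of questions" can look like in practice, here is a simplified sketch of an interview note kept in a machine-readable form. The field names and question blocks are hypothetical, not the template we actually used.

```python
# A simplified, hypothetical interview-note structure: every respondent is
# captured with the same blocks, so answers stay comparable and the model
# receives a consistent input format. Field names are illustrative only.
import json

interview_note = {
    "respondent_id": "R-07",
    "company_profile": "SME in the target B2B segment",
    "blocks": {
        "current_needs": "Verbatim notes on the needs the respondent described...",
        "decision_criteria": "Verbatim notes on how they choose suppliers...",
        "lifecycle_stage": "Where they currently sit in the client lifecycle...",
        "reaction_to_offer": "Verbatim reaction to the joint offer...",
    },
}

# Serialise to JSON so the notes can be attached to the analysis model as a document.
print(json.dumps(interview_note, indent=2, ensure_ascii=False))
```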
In addition, the models had access to methodological documents containing the research context, target group details, core assumptions, and guidelines for drawing conclusions. This ensured that the model didn’t infer beyond its input, but stayed within a clearly defined framework.
In practice, GPT proved especially useful for processing large volumes of content, reviewing interviews, identifying recurring patterns, and providing structured insights supported by relevant quotes. It was also helpful when we wanted to double-check for anything we might have missed, serving as an additional layer of validation.
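As an illustration of that pattern-finding step, an analysis request can be phrased so that every reported pattern has to be backed by verbatim quotes. The helper below is a hypothetical sketch; the wording and the recurrence threshold are assumptions for illustration, not the exact prompts from the project.

```python
# A hypothetical helper that phrases an analysis request so that each pattern
# must be supported by verbatim quotes; wording and threshold are illustrative.
def build_insight_prompt(documents: str, min_respondents: int = 3) -> str:
    """Compose a quote-grounded analysis request for the analysis assistant."""
    return (
        f"Documents:\n{documents}\n\n"
        f"Task: list the patterns that recur across at least {min_respondents} "
        "respondents. For each pattern give (1) a one-sentence description, "
        "(2) the verbatim supporting quotes with respondent IDs, and "
        "(3) the business implication. Omit any pattern you cannot support "
        "with a quote."
    )
```

A request structured this way also makes it easier to check whether the model is genuinely quoting the material or merely paraphrasing it.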
However, as the work progressed, our initial assumption was confirmed—AI is not an analyst. When it came to interpreting results, prioritising insights, connecting them to the business context, and forming recommendations, the model began to show its limitations. In some cases, it fell into a loop, repeating similar conclusions in different variations without adding real value.
That’s why we used AI as support and a tool, but never as a substitute for decision-making. The more complex parts of the analysis remained in the hands of the team, while GPT served as an accelerator, a double-check mechanism, a brainstorming aid, and a helper in shaping raw data.
What Did the Client Receive and How Was the Custom GPT Created?
At the end of the process, the client received much more than just a report with analysis and recommendations. In addition to validating the offer and clearly identifying what works and what needs improvement, we also provided concrete guidelines for:
- Designing the product and service package within the joint offer
- Planning the pilot phase
- Developing a go-to-market approach aligned with the development stage of the target client group
We also transformed all insights, documentation, interviews, surveys, and recommendations into a tool the client can use after the end of the project—a custom GPT model created specifically for internal use. This model enables teams across various departments—such as marketing, innovation, sales, and product development—to quickly and efficiently access user insights derived from the conducted research.
The model functions as an interactive assistant that understands:
- The habits, attitudes, and behaviours of users in the target group
- The criteria they use for making decisions
- The stage of the user journey at which they are most ready to adopt certain products
- How to communicate the value of the offer in language that resonates with them
In addition to delivering insights, the GPT model also supports ideation, whether for refining communication, developing new initiatives, or positioning products. All of this is possible without the need for additional analysis, revisiting documentation, or searching through folders, as all relevant data has already been entered and systematised within the model. The added value of this approach lies in the fact that the tool remains within the company, requires no additional training, and can be used in daily operations whenever needed.
AI as Support, Not a Substitute
This project confirmed that using AI tools in consulting makes sense—but only when there is a clearly defined framework and a precise understanding of what we want from the tool. Our approach was to assign a clear role and a specific task to each tool, and that’s where they proved to be most effective.
AI helped us analyse data more quickly, identify anything we might have missed, and shape insights more precisely and systematically. In certain stages, it accelerated the process and supported a clearer structure—but understanding the context, making recommendations, and crafting key messages remained firmly in the hands of the team.
In this sense, the project was much more than just the validation of a single offer. It was an opportunity to experiment, learn, and enhance our own methodology—to explore how AI can become true support in our work, rather than just a tool used for the sake of following a trend. We believe such tools will become increasingly present in business processes, and that the real differentiator is not the tool itself, but how it is used. If you're considering ways to modernise your approach to validation or market research, this experience may be a useful reference.
It certainly is for us.