
Product User Guide & Advisory
How to Use AI as a Thought Partner
Understanding the Full Potential and the Limitations
User Advisory
While Custom GPTs offer impressive capabilities in answering questions correctly and concisely, it's important to recognize that they are not infallible. They are not magic, and they do not process information the way a human does. Occasionally a GPT constructs an answer in a way that makes the reply somewhat misleading or entirely wrong, because of how it processes the structure of the data in its corpus. It is crucial to regard answers to questions with serious implications as advisory only.
These products are not validated for accuracy by any regulatory agency or authority; as such, they provide supplemental information only. Although the data corpus may consist of legally binding documentation from a competent authority, the generated answers are a distillation of that data, and the distillation may inadvertently misconstrue the most accurate answer possible. (Surprise! It makes mistakes on occasion.)
A reasonable comparison is a pilot supplementing positional information with the moving map display on a tablet device while in flight. Such a display has technical and legal limitations: it is not legally binding or compliant with the federal aviation regulations (FARs) for the true and accurate presentation of position or velocity information. As an aid to situational awareness, however, such a depiction can provide incredibly useful supplemental information, even though it may contain some discrepancies from the approved navigational equipment. No pilot would ever rely on a moving map display as the authoritative source for navigation.
Considerations:
These products are optimized to provide concise answers, and an answer may be subtly truncated and more generalized than a fully complete, technically accurate response would require. For example, an airline fleet may consist of 20 aircraft, one of which is somewhat unique. An inquiry about the flap speed limitations for the fleet at large may yield numbers that apply to most of the fleet but omit the exception. The information is in the data corpus, yet it may not be presented on the first iteration, requiring a follow-up question. This is a very subtle snare for anyone expecting 100% accuracy from a highly generalized question. If you know there are differences within the fleet, it is wise to pose the prompt in a more thoughtful manner, either from the start or as a follow-up, such as by asking whether the limitations apply to the entire fleet.
While GPTs construct their answers from large language models (LLMs), they are never truly capable of lying to a user. Rather, they generate results from a probability distribution over the model's entire vocabulary, coupled, in the case of a custom GPT, with the constrained knowledge base. Because LLMs are stochastic (pseudo-random) word generators, there is always a non-zero chance that the model outputs something unexpected that deviates from the truth.
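To make the stochastic-generation point concrete, here is a minimal Python sketch of next-token sampling. The vocabulary, logits, and seed are invented for illustration; a real model samples over tens of thousands of tokens:

```python
import numpy as np

# A minimal sketch of stochastic next-token sampling, the mechanism
# described above. Vocabulary and logits are hypothetical.
vocab = ["15", "20", "25", "30"]          # candidate next tokens
logits = np.array([1.2, 2.9, 1.7, 0.4])  # raw model scores (invented)

def sample_next_token(logits, temperature=1.0, rng=None):
    """Convert logits to probabilities (softmax) and draw one token."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Even when "20" is the most likely token, every other token keeps a
# non-zero probability -- the "non-zero chance" of an unexpected output.
rng = np.random.default_rng(0)
draws = [vocab[sample_next_token(logits, rng=rng)] for _ in range(10)]
print(draws)  # mostly "20", but occasionally something else
```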
Insofar as the model ‘speaks the truth’, it is only as accurate as the truthfulness of its training data.
The model doesn't evaluate the truthfulness of each word and statement; rather, it generates responses based on statistical patterns and probabilities within the knowledge base. The model is not searching for the truth but providing the most statistically reasonable response, one that resembles the likely solution paths the model previously internalized.
To conceptualize how an error might be generated, consider that a number (say, 20) is commonly grouped in the knowledge base near a term like "bank angle". The GPT grasps at probable associations. In the scope of one specific limitation, a "20° bank angle" would not be a suitable answer for a crosswind landing; however, the data corpus may include citations of a "20° angle of bank" in the context of other information. A wrong answer to one specific limitation can thus surface simply because it has a higher statistical likelihood of satisfying the answer criteria.
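The following toy Python sketch is not how an LLM works internally, but it illustrates the failure mode: when raw frequency decides the answer, the most common association wins even if the question concerns a different context. The corpus snippets and numbers are invented for illustration:

```python
from collections import Counter

# Invented corpus snippets: "20" co-occurs with "bank angle" most often,
# even though the crosswind-landing figure here is "5".
corpus = [
    "maintain a 20 degree bank angle during the turn",
    "do not exceed a 20 degree bank angle below 1000 ft",
    "crosswind landing demonstrated with a 5 degree bank angle",
]

def most_associated_number(corpus, term):
    """Return the number that most often co-occurs with `term`."""
    counts = Counter()
    for line in corpus:
        if term in line:
            counts.update(tok for tok in line.split() if tok.isdigit())
    return counts.most_common(1)[0][0]

# The frequency winner is "20" -- regardless of whether the question
# was actually about crosswind landings, where "5" is the right figure.
print(most_associated_number(corpus, "bank angle"))  # -> "20"
```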
Remedies:
When there is something suspect about an answer, whether it merely seems inaccurate or unreasonable for any reason, it's best to prompt the GPT to clarify its answer. Follow-up prompting is highly recommended whenever there is doubt about an answer.
Mindful prompting helps tremendously. To make sure that exceptions are presented in the answer, frame the question in a way that uncovers variations. In the earlier flap speed example, a better prompt would be, "Please provide the flap speed limitations for every aircraft in the fleet." That framing asks the model to enumerate each aircraft, making it far more likely to cover all exceptions.
The products are designed to cite sources for each answer; occasionally, however, a source is not provided. When no source is disclosed at the end of an answer, it is highly recommended to ask for one to gain additional confidence in the answer. Simply follow up with "What is the source?" or "Provide the source." Such a follow-up will generally produce a satisfactory citation.
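For those interacting with a model programmatically rather than in the ChatGPT window, the same follow-up technique looks like this sketch using the OpenAI Python SDK. The model name and question are placeholders; in the chat interface the equivalent is simply typing the follow-up into the same conversation:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start the conversation and keep the full history so the follow-up
# is answered in the context of the previous reply.
history = [
    {"role": "user", "content": "What are the flap speed limitations?"},
]
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append(
    {"role": "assistant", "content": first.choices[0].message.content}
)

# If no source was cited, ask for one in the same conversation.
history.append({"role": "user", "content": "What is the source?"})
follow_up = client.chat.completions.create(model="gpt-4o", messages=history)
print(follow_up.choices[0].message.content)
```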
Remember that the answers provided by the product are for educational purposes only and are not legally binding. Some answers to thorny questions require follow-up with sourced material.
Seek Ideas, Not Just Answers
Ask for ideas, not answers. If you ask for an answer, it will give you one (and perhaps a very good one). But you can also use it as a thought partner, in which case it's better equipped to give you ideas, feedback, and other things to consider. For optimal value, maintain an open-ended conversation that keeps evolving rather than rushing to an answer. Of course, if you need an immediate answer, that's what you should ask for.
Enhance with Additional Data
Give it additional data. Feel free to upload additional PDFs — business plans, strategy memos, household budgets — and talk to the AI about your unique data and situation.
Provide Ample Context
More context is better. The trick is to give AI enough context to start making associations. Having a “generic” conversation will give you generic output. Give it enough specific information to help it create specific responses (technical constraints or sticking points, your company valuation, your marketing budget, your boss’s negative feedback about your last idea, an MRI of your ankle). And then take the conversation in different directions.
Adopt a Persona
Ask it to adopt a persona. "If Elon Musk were your co-CEO, what remote work policies would he put in place for the management team?" That's a question Google could never answer, but an LLM will respond without hesitation.
Utilize Decision Frameworks
Ask AI to run your problems through decision frameworks. Massive amounts of knowledge are stored in LLMs, so don't hesitate to have the model explain concepts to you. Ask, "How would a captain, CFO, team leader, or board member tackle this problem?" or "What are two frameworks CEOs have used to think about this?" Then have a conversation with the AI unpacking those answers.
Challenge and Defend AI’s Ideas
Make the AI explain and defend its ideas. Say, “Why did you give that answer?” “Are there any other options you can offer?” “What might be a weakness in the approach you’re suggesting?”
