
Product User Guide & Advisory

How to Use AI as a Thought Partner


Understanding the Full Potential and the Limitations

While custom GPTs offer impressive capabilities in answering questions correctly and concisely, it is important to recognize that they are not infallible. They are not magic, and they do not process information the way a human does. Occasionally a GPT constructs an answer from its data in a way that is misleading or entirely wrong, simply because of how it processes the structure of the data in its corpus. For questions with serious implications, treat the answers as advisory only.

These products are not validated for accuracy by any regulatory agency or authority; as such, they provide supplemental information only. Although the data corpus may consist of legally binding documentation from a competent authority, the generated answers are a distillation of that data and may inadvertently misconstrue the most accurate answer possible. (Surprise: it makes mistakes on occasion.)

A reasonable comparison is a pilot supplementing positional information in flight with the moving map display on a tablet device. Such a display has technical and legal limitations: it is not legally binding or compliant with the Federal Aviation Regulations (FARs) for the true and accurate presentation of position or velocity information. As an aid to situational awareness, however, such a depiction can provide incredibly useful supplemental information, even though it may contain discrepancies from the approved navigational equipment. No pilot would ever treat a moving map display as the authoritative source for navigation.

Considerations:

These products are optimized to provide concise answers, so a generated answer may be subtly truncated and more generalized than a fully complete and technically accurate response would require. For example, an airline fleet may consist of 20 aircraft, one of which is somewhat unique. An inquiry about flap speed limitations for the fleet at large may yield numbers that apply to most of the fleet but omit the exception. The information is in the data corpus, yet it may not surface on the first iteration, requiring a follow-up question. This is a very subtle snare for anyone expecting 100% accuracy from a highly generalized question. If you know that differences exist, it is wise to pose the prompt more thoughtfully, either from the start or as a follow-up, for example by asking whether these limitations apply to the entire fleet. A minimal sketch of this follow-up pattern appears below.
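
As a concrete illustration of that follow-up pattern, here is a minimal sketch using the OpenAI Python client as a stand-in for the chat interface; the model name and prompt wording are illustrative assumptions, not product settings.

```python
# A minimal sketch of the follow-up pattern: ask the general question,
# then explicitly probe for fleet exceptions. Model name and prompts
# are hypothetical.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user",
     "content": "What are the flap speed limitations for our fleet?"},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})

# The concise first answer may cover most of the fleet but omit the
# one non-standard aircraft, so ask about exceptions directly.
messages.append({"role": "user",
                 "content": "Do these limitations apply to every aircraft "
                            "in the fleet, or are there exceptions?"})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```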

While GPTs construct their answers from large language models (LLMs), they are not truly capable of lying to a user. Rather, they generate results from a probability distribution over their entire word vocabulary, coupled, in the case of a custom GPT, with the constrained knowledge base. Because LLMs are stochastic (pseudo-random) word generators, there is always a non-zero chance that the model outputs something unexpected that deviates from the truth.
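
To make the sampling idea concrete, here is a minimal sketch of temperature sampling from a toy next-token distribution; the vocabulary and scores are invented for illustration, and real models sample over tens of thousands of tokens.

```python
# Toy illustration of stochastic next-token sampling. Scores ("logits")
# are converted to probabilities with a softmax, then one token is
# drawn at random according to those probabilities.
import numpy as np

rng = np.random.default_rng()

vocab = np.array(["20", "25", "30", "degrees", "bank"])
logits = np.array([2.1, 1.4, 0.3, 1.9, 1.7])  # invented model scores

def sample_next_token(logits, temperature=0.8):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    # Even the most likely token is not certain: there is always a
    # non-zero chance of drawing a less appropriate one.
    return rng.choice(vocab, p=probs)

print(sample_next_token(logits))
```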

Insofar as the model ‘speaks the truth’, it is only as accurate as the truthfulness of its training data.

The model doesn’t evaluate the truthfulness of each word and statement; rather it generates responses based on statistical patterns and probabilities within the knowledge base information.

The model is not searching for the truth; rather, it provides the best statistically reasonable response, one that resembles likely solution paths the model has previously internalized.

To conceptualize how an error might be generated, consider that a number (say, 20) appears frequently in the knowledge base near common terms like “bank angle”. The GPT is essentially grasping at probable associations. So, in the scope of a specific limitation, a “20° bank angle” would not be a suitable answer for a crosswind landing question. However, the data corpus may include citations of “20° angle of bank” in the context of other information present in the data. A wrong answer to one specific limitation can therefore surface simply because it has a higher statistical likelihood of satisfying the answer criteria. The toy example below illustrates this kind of spurious association.
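
The toy sketch below shows how raw co-occurrence statistics can make a number look like the best answer regardless of context; the phrases and counts are invented solely for illustration.

```python
# Count which numbers co-occur with the word "bank" in a toy corpus.
# Because "20" dominates the association, a purely statistical answer
# to any bank-related limitation tends to surface "20", even when the
# question concerns a different limitation entirely.
from collections import Counter

corpus_phrases = [
    "maintain 20 degrees angle of bank in the holding pattern",
    "do not exceed 20 degrees angle of bank below 1000 ft",
    "maximum demonstrated crosswind component 15 knots",
]

near_bank = Counter()
for phrase in corpus_phrases:
    words = phrase.split()
    for i, word in enumerate(words):
        if word == "bank":
            for neighbor in words[max(0, i - 4):i]:
                if neighbor.isdigit():
                    near_bank[neighbor] += 1

print(near_bank.most_common(1))  # [('20', 2)]
```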

Additional Considerations and Remedies for Custom GPT Products

Considerations:

Context-Specific Accuracy: The model may produce answers that are contextually accurate but might not be applicable to specific scenarios without additional context. For example, a legal query might result in a generalized answer that doesn't account for jurisdictional variations.


Temporal Relevance: The data corpus may include outdated information. The model might generate responses based on past data that are no longer valid or have been superseded by new information.

Bias and Ethical Considerations: The model can reflect biases present in its training data. This can lead to biased or ethically questionable outputs, particularly in sensitive areas like hiring practices, legal advice, or medical recommendations.

Complex Queries: For highly complex or multi-faceted queries, the model might generate a response that oversimplifies the issue. It is important to decompose such queries into smaller, more specific parts to obtain a more accurate and detailed response.

Remedies:

Provide Detailed Context: When posing a question, include as much relevant detail as possible. This helps the model generate a more contextually appropriate response. For instance, specify the jurisdiction when asking legal questions or the date range for historical inquiries.


Verify Temporal Information: Always verify the temporal relevance of the information provided. For critical decisions, cross-check the model's output with the latest data available from authoritative sources.

Prompt for Clarification: If an answer seems biased or ethically problematic, prompt the model to clarify its reasoning or provide alternative perspectives. Ask follow-up questions to explore different angles of the issue.

Decompose Complex Queries: Break down complex queries into simpler, more focused questions. For example, instead of asking for a comprehensive legal opinion, ask about specific laws or precedents and then integrate the responses (see the sketch following these remedies).

Utilize Multiple Sources: Cross-reference the model’s answers with information from other reputable sources. When in doubt, seek human expertise to validate the responses, especially for critical decisions.

End-users should strive to understand the limitations and proper use of the model for optimal outcomes.
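
As a sketch of the “Provide Detailed Context” and “Decompose Complex Queries” remedies working together, the snippet below splits a broad legal question into jurisdiction-specific sub-questions; the jurisdiction, prompts, and model name are hypothetical.

```python
# Decompose one broad legal question into focused sub-questions, each
# carrying explicit jurisdictional context. Prompts are hypothetical.
from openai import OpenAI

client = OpenAI()

jurisdiction = "State of Texas"  # hypothetical context detail
sub_questions = [
    f"In the {jurisdiction}, which statutes govern residential lease termination?",
    f"In the {jurisdiction}, what notice period must a tenant give a landlord?",
]

answers = []
for question in sub_questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    answers.append(response.choices[0].message.content)

# Integrate the focused answers yourself, then cross-check them against
# authoritative sources before acting on them.
for question, answer in zip(sub_questions, answers):
    print(question, "->", answer)
```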

Seek Ideas, Not Just Answers

Treat the GPT as a brainstorming partner: ask it to propose options, trade-offs, and framings you had not considered, rather than treating its first response as a final verdict.

Provide Ample Context

Include as much relevant detail as possible in your prompt; the more context the model has to work with, the more appropriate and specific its response can be.

Utilize Decision Frameworks

Ask the model to structure its reasoning explicitly, for example by listing pros and cons or weighing options against stated criteria, so its logic is easy to audit.

Enhance with Additional Data

Supply relevant documents, figures, or excerpts directly in the conversation when the answer depends on information that may not be in the data corpus.

Adopt a Persona

Ask the model to answer from a specific role, such as a skeptical reviewer or an examiner, to shape the perspective of its response (see the sketch after these tips).

Challenge and Defend AI's Ideas

Push back on the model's conclusions and ask it to defend or revise them; this kind of cross-examination surfaces weak reasoning and hidden assumptions.
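
To illustrate the “Adopt a Persona” tip, here is a minimal sketch in which a system message frames the model as a skeptical reviewer before the question is asked; the persona wording and model name are assumptions for illustration only.

```python
# A system message sets the persona; the user message then asks the
# real question. Persona wording and model name are hypothetical.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a skeptical technical reviewer. Flag any "
                    "statement in your own answer that should be verified "
                    "against authoritative documents."},
        {"role": "user",
         "content": "Summarize the crosswind limitations and note any "
                    "fleet exceptions I should confirm."},
    ],
)
print(response.choices[0].message.content)
```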


Discover What's Essential 
Your Expertise, Amplified

“Products for Professionals Who Are Swimming in Too Much Information”
