Kevin Ball


Kevin Ball is the Vice President of Engineering at Mento. An experienced web developer and entrepreneur, he has co-founded two companies and served as their CTO, founded the San Diego JavaScript Meetup, is a panelist on the JSParty podcast, and organizes the AI in Action discussion group through Latent Space.

Kevin has been interviewed on front-end frameworks by Web Designer Depot and has written articles as a subject-matter expert for publications including Smashing Magazine, Web Designer magazine, Net magazine, Creative Bloq, and LeadDev. He has also spoken at conferences around the world, including All Things Open, Web Unleashed, NodeConf Colombia, and React Amsterdam.


The Extract/Validate/Extrapolate Loop


LLMs as a building block for application development introduce new capabilities and challenges for software developers. Among their fundamental challenges: their output is unstructured, they are slow, and they can hallucinate. Among their fundamental strengths: they are adaptable and flexible, and they can extract structure from unstructured data.


The extract/validate/extrapolate loop is a new UI pattern & application building block that attempts to leverage the strengths of LLMs while working around and compensating for their weaknesses. It looks roughly like this:


Use the LLM to extract some valuable information from a body of text or user interaction. This might be a conversation, where you are trying to understand what a user wants. Or it might be a more structured task, like summarizing or extracting data from a document.
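A minimal sketch of this extraction step, assuming a hypothetical `call_llm` client (stubbed here with a canned response) and a prompt that asks the model for JSON output. The `extract_goal` helper and its keys are illustrative, not an actual Mento API:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; a real implementation would call an LLM API."""
    # Canned response for illustration only.
    return '{"goal": "improve delegation", "timeframe": "3 months"}'

def extract_goal(conversation: str) -> dict:
    """Ask the model to pull a structured goal out of free-form text."""
    prompt = (
        "Extract the user's coaching goal from the conversation below "
        "as JSON with keys 'goal' and 'timeframe'.\n\n" + conversation
    )
    return json.loads(call_llm(prompt))

extracted = extract_goal(
    "I keep doing everything myself; I'd like to hand more off to my team."
)
```

The essential move is forcing unstructured conversation into a structured shape (here, JSON) that the rest of the system can reason about.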


Confirm the validity/correctness of your extraction. In many cases this might be a user interaction. For example, you might extract a structured piece of information via a function call, show it in the UI, and get the user to confirm. In other situations validation might be some form of automated eval, or transformation into a formally validatable format.
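The validation step can layer an automated check (does the extraction have the expected shape?) under a user confirmation. In this sketch, `confirm` is a hypothetical stand-in for a UI prompt that shows the extracted data and asks the user to approve it:

```python
REQUIRED_KEYS = {"goal", "timeframe"}

def validate_extraction(data: dict, confirm) -> bool:
    """Automated shape check first, then user confirmation."""
    if not REQUIRED_KEYS <= data.keys():
        return False  # extraction is structurally incomplete
    # `confirm` stands in for a UI interaction: show the extracted
    # goal to the user and let them approve or reject it.
    return confirm(data)

ok = validate_extraction(
    {"goal": "improve delegation", "timeframe": "3 months"},
    confirm=lambda data: True,  # simulated user approval
)
```

Failing either check sends you back to extraction rather than forward into action, which is what keeps hallucinated output from propagating.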


Take the now-confirmed piece of information and act on it. This action might be moving to another LLM-based interaction, triggering a non-LLM-based workflow, or simply storing the data for future use.
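Wired together, the three steps above might look like the following sketch, with stubbed components standing in for the LLM call and the confirmation UI. The function names and retry bound are illustrative assumptions, not a prescribed implementation:

```python
def extract_validate_extrapolate(text, extract, validate, act, max_tries=3):
    """One pass through the loop: retry extraction until it validates."""
    for _ in range(max_tries):       # bound retries so a bad extraction
        candidate = extract(text)    # step 1: LLM extraction
        if validate(candidate):      # step 2: user/automated confirmation
            return act(candidate)    # step 3: act on the confirmed data
    raise ValueError("no confirmable extraction")

# Stub components for illustration; a real system would call an LLM
# in `extract` and show a confirmation UI in `validate`.
result = extract_validate_extrapolate(
    "note",
    extract=lambda t: {"summary": t.upper()},
    validate=lambda c: "summary" in c,
    act=lambda c: c["summary"],
)
```

Because the action only ever runs on validated data, downstream workflows can treat the result as trustworthy even though an LLM produced it.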


There are a number of benefits to building around the extract/validate/extrapolate loop. Some of the ones we’ve seen are:

- Keeping the LLM from going too far off the rails
- Having confirmed pieces of information that you can move around and use in other parts of a software system
- Providing a supportive UI that “feels” better than just text

Throughout this talk I will use examples from our experience building a user-facing “AI Coach”/self coaching tool at Mento.