Michael Bell


Michael Bell is the Director of Sullivan on Comp, a leading digital resource on California workers’ compensation law for attorneys and adjusters. Recently, Bell spearheaded the creation and launch of ChatSOC, leveraging Sullivan on Comp’s extensive content to provide answers to users’ questions about California workers’ compensation law.


From Treatise to Tech: The Development of ChatSOC, an AI Research Assistant


At Sullivan on Comp, using AI to let users ask questions about our legal content was an obvious opportunity. However, the path to implementation was not always clear; often, we could only see one or two steps ahead. Yet we persisted and launched ChatSOC, an AI assistant that has been tremendously well-received.

Initially, we experimented with training an open-source AI model, but the results were disappointing. We then shifted to a retrieval-augmented generation (RAG) approach, which was significantly more effective. Two critical factors in our success were finding the right developer and creating robust test data.
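For readers unfamiliar with the pattern, retrieval-augmented generation means finding the passages most relevant to a question first, then handing them to a language model as context rather than relying on what the model memorized in training. The sketch below illustrates the idea with a toy corpus and word-overlap scoring; the section texts, scoring method, and prompt format are simplified placeholders, not the production ChatSOC pipeline.

```python
from collections import Counter

# Toy corpus standing in for treatise sections (hypothetical content).
SECTIONS = {
    "temporary-disability": "Temporary disability benefits compensate for lost wages during recovery.",
    "permanent-disability": "Permanent disability ratings are based on the level of lasting impairment.",
    "medical-treatment": "The employer must provide medical treatment reasonably required to cure or relieve the injury.",
}

def score(question: str, text: str) -> int:
    """Count overlapping words between the question and a section (a stand-in for real semantic search)."""
    q_words = Counter(question.lower().split())
    t_words = Counter(text.lower().split())
    return sum((q_words & t_words).values())

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k best-matching section texts."""
    ranked = sorted(SECTIONS.values(), key=lambda t: score(question, t), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble the prompt the model would receive: retrieved context first, then the question."""
    context = "\n\n".join(retrieve(question))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"
```

A production system would replace the word-overlap scorer with embedding-based search over the full treatise, but the shape of the pipeline (retrieve, assemble prompt, generate) stays the same.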

In this session, I will not only walk through the development of ChatSOC but also highlight actionable insights you can apply in your own projects. Expect to learn about overcoming technical challenges, making strategic decisions, and managing project costs effectively. This talk will equip attendees with the knowledge to navigate AI project hurdles and understand the investment and value such technologies can bring.


What inspired you to delve into AI and pursue a career in this field?

As publishers of Sullivan on Comp, a comprehensive legal treatise on California workers' compensation law, we saw an opportunity to leverage AI to enhance user experience. Traditionally, our users had to manually search through the content, which was time-consuming and required a certain level of expertise. The potential of AI to directly answer users' questions with precise summaries from our content promised to significantly improve their research experience, making it more efficient and accessible.

What do you see as the most significant challenges in deploying AI in production, and how do you suggest overcoming them?

One significant challenge in deploying AI is ensuring objective performance assessment. Initially, our evaluations were subjective, making it difficult to gauge improvements. To address this, we developed a comprehensive set of test data, which we now use to rigorously evaluate our model with each update. This approach provides clear, measurable insights into the effectiveness of changes, ensuring continuous improvement.

What advice would you give to product teams and developers just starting with AI projects?

Start with a clear vision.

When we set out to create ChatSOC, we had no idea how we'd do it. But we had a very clear target, and the product we built is very close to that original concept.

If you're unclear on what to do first, start by seeking expertise. Go to conferences, talk to experts, and engage in communities. We used platforms like Clarity.fm and Upwork to gain insights and identify initial steps.

But at some point, you have to jump in. Build something and try it out. Get it in front of real users as early as possible and learn from their feedback.

Once we started building, we gathered more and more data and iterated extensively. Producing great results with AI is more like running experiments in a lab than writing deterministic programs. So get creative, try things, and assess the results. Note every result that doesn't match your vision, find another way to iterate, and try again.

Connect with peers working on similar projects. Share experiences, approaches, and challenges. Ask for feedback and learn from their insights.