Exploring an AI-enhanced Book Sprint: A principles-first approach

The rise of artificial intelligence (AI) means people are discovering how they might leverage this new technology for various tasks – at school, at work, and maybe even at a Book Sprint. 

With this in mind, we explored what an AI-enhanced Book Sprint might look like by testing some key assumptions and tools in several of our Book Sprints this year. We’re excited to share the insights we’ve learned from these tests, which will help us determine the best way to move forward with AI in Book Sprints.

Our Approach

Each company has its own response to the rise of AI – whether that is to reject it, embrace it, or move towards something in between. Book Sprints’ response to AI might be counted as something in between. We are open to exploring its benefits, as long as we stay grounded in three key principles: ethical and transparent use; productivity; and accuracy and trust.

These principles shape the role we see AI playing in our Sprints and how we can therefore control, limit, and maximize its use. They have guided us in understanding where AI might be useful in a Sprint and in testing those assumptions accordingly.

Principle 1: Ethical and Transparent Use

For us, it is important that AI use is ethical and transparent. What does that mean in practice? 

While the highly complex legalities and ethics of AI are still being widely discussed, and differ considerably from place to place, we emphasize that there must be clear intention and effort to use AI ethically in a Sprint. This means that, to the best of our knowledge and in line with accepted standards of fair use, its application must be ethical.

Transparency, in this context, refers to how we might disclose that AI-enhanced tools were used to support the production of a body of work. We explored many different ways of doing this – from disclosing every prompt used to disclosing only those parts generated from scratch by AI.

Principle 2: Productivity

It is also important to us that AI-enhanced tools in a Sprint contribute to the productivity and efficiency of the Book Sprint process, instead of taking away from it. This means that the use of AI cannot simply produce long blocks of text that a human then has to painstakingly edit down; that would take more time and effort, not less.

Instead, we focused on identifying small, controllable use cases to prevent the AI use from getting out of hand. Keeping in mind that the authors will and should remain the experts (they’re not being replaced with AI), we found that AI-enhanced tools might be most useful for automating the repetitive, tedious tasks that our authors often deal with in a Sprint.

Principle 3: Accuracy and Trust

Finally, it is important to us that the final output of any product using AI-enhanced tools is accurate and trustworthy. It is well established by now that large language models hallucinate; thus, we must ensure that a human always checks or edits the AI output. A human is always involved in the process.

Another way we found to ensure the accuracy and trustworthiness of the product – while at the same time respecting the experience and expertise of our partners – was to utilize the body of knowledge our partners could already provide prior to the Sprint. Our book editing platform Ketty, developed by Coko Foundation, has a feature that allows us to upload a knowledge base from which the AI sources its responses and material. This way, the prompts are targeted at drawing knowledge from what has already been established by experts as accurate and trustworthy.

Finally, we decided to focus on low-risk use cases for the AI-enhanced tools, so that error correction and revision would be easier to control.

The Use Cases

We tested several small, low-risk, controllable use cases for AI-enhanced tools during some of our Book Sprints this year:

Create content based on the knowledge base

The knowledge base feature of our book editing platform Ketty allows our author-experts to upload their previous works, which the AI then references when responding to user prompts. We worked with prompt-engineering experts to create prompts that our authors could use. These prompts were very detailed, covering the scope of the book, the intended reader, and notes on the style guide. Each prompt would generate content based on the pre-existing materials in the knowledge base, already following the style guide.
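To make this concrete, here is a minimal sketch of what this kind of knowledge-base-grounded prompting could look like, assuming a naive keyword-matching retrieval step. The function names, data, and structure below are hypothetical illustrations, not Ketty’s actual implementation.

```python
# Hypothetical sketch: composing a detailed, knowledge-base-grounded prompt.
# Names and retrieval logic are illustrative only, not Ketty's implementation.
from dataclasses import dataclass

@dataclass
class PromptSpec:
    book_scope: str        # what the book covers (and what it does not)
    intended_reader: str   # who the text is written for
    style_notes: str       # key points from the style guide

def select_excerpts(knowledge_base: list[str], topic: str, limit: int = 3) -> list[str]:
    """Naive keyword match standing in for real retrieval over uploaded works."""
    hits = [doc for doc in knowledge_base if topic.lower() in doc.lower()]
    return hits[:limit]

def build_prompt(spec: PromptSpec, topic: str, excerpts: list[str]) -> str:
    """Bundle scope, audience, style notes, and source excerpts into one prompt."""
    sources = "\n\n".join(f"Source excerpt:\n{e}" for e in excerpts)
    return (
        f"Draft a section on '{topic}'.\n"
        f"Book scope: {spec.book_scope}\n"
        f"Intended reader: {spec.intended_reader}\n"
        f"Style guide notes: {spec.style_notes}\n"
        f"Use ONLY the source excerpts below; do not add outside facts.\n\n{sources}"
    )

if __name__ == "__main__":
    kb = [
        "Incident response plans should name a single decision owner...",
        "Our earlier field guide recommends quarterly tabletop exercises...",
    ]
    spec = PromptSpec("Incident response for small NGOs",
                      "non-technical programme staff",
                      "plain language, active voice, UK spelling")
    prompt = build_prompt(spec, "incident", select_excerpts(kb, "incident"))
    print(prompt)  # this prompt would then go to the model behind the editing platform
```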

Apply style guide across the book 

For this use case, we wanted to test the ability of the AI to create cohesion in the tone, language, grammar, spelling, and similar aspects of the book based on the style guide. This would be useful not only in cleaning up the written material but also in aligning the writing styles of anywhere between 10 and 15 different authors. In keeping with our principles, we also designed the feature so that the AI-enhanced tool would submit feedback and propose changes to the authors. This keeps humans in the loop for all changes made to the final product.
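As a rough illustration of that human-in-the-loop pattern – again hypothetical, not Ketty’s actual code – proposed edits could be held as suggestions and applied only once an author accepts them:

```python
# Hypothetical sketch: keeping humans in the loop on style-guide edits.
# The AI proposes changes; nothing is applied until an author accepts it.
from dataclasses import dataclass

@dataclass
class Suggestion:
    original: str
    proposed: str
    rationale: str        # e.g. which style-guide rule prompted the change
    accepted: bool = False

def apply_accepted(text: str, suggestions: list[Suggestion]) -> str:
    """Apply only the suggestions an author has explicitly accepted."""
    for s in suggestions:
        if s.accepted and s.original in text:
            text = text.replace(s.original, s.proposed, 1)
    return text

if __name__ == "__main__":
    draft = "The organisation utilizes a number of tools to facilitate collaboration."
    proposals = [
        Suggestion("utilizes", "uses", "style guide: prefer plain verbs"),
        Suggestion("a number of", "several", "style guide: avoid wordy phrases"),
    ]
    proposals[0].accepted = True  # author accepts the first, rejects the second
    print(apply_accepted(draft, proposals))
```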

Create glossary of terms, list of acronyms, and propose headings

For this final use case, we focused on the smaller tedious tasks that authors have to deal with – such as creating a glossary of terms, compiling a list of acronyms, and writing appropriate headings. As in the previous use case, the headings would be suggested by the AI tool for human review and approval.
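To show what such a starting point might look like, here is a small hypothetical sketch (not Ketty’s implementation) in which a first pass over the text drafts an acronym list for the authors to complete and correct:

```python
# Hypothetical sketch: drafting an acronym list for authors to review.
# A simple pattern scan like this only produces a starting point;
# authors still supply the expansions and prune any false positives.
import re
from collections import OrderedDict

def draft_acronym_list(text: str) -> "OrderedDict[str, str]":
    """Collect capitalised abbreviations (2-6 letters) in order of first appearance."""
    acronyms: "OrderedDict[str, str]" = OrderedDict()
    for match in re.finditer(r"\b([A-Z]{2,6})s?\b", text):
        acronyms.setdefault(match.group(1), "TODO: expansion to be confirmed by an author")
    return acronyms

if __name__ == "__main__":
    sample = "The NGO worked with the UNHCR and several CSOs on GBV prevention."
    for abbr, expansion in draft_acronym_list(sample).items():
        print(f"{abbr}: {expansion}")
```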

The Results

We ran these use cases with AI-enhanced tools in some Book Sprints this year, with groups that differed in composition, size, author background, and openness to AI. Given the varying demographics of the authors, we caution that these results should be taken with a grain of salt and an open mind. Here are some key insights and debunked assumptions we discovered, aligned with the principles discussed earlier:

Lesson 1: On Ethical and Transparent Use

AI could be used in a text in a variety of different ways – even our own use cases vary in the extent of AI’s involvement in content production. This means that the disclaimer in the final product regarding the use of AI in the content creation process would differ from Sprint to Sprint, quite apart from the requirements of the different publishers these books might go out to. We discovered that some authors might be uncomfortable with this disclosure of AI use, primarily because they still feel that the work is theirs and was not created from scratch using solely AI. Thus, the intricacies of AI-use disclosure may pose challenges.

Lesson 2: On Productivity

We tested several different use cases to enhance productivity during a Book Sprint, with varying results. 

First, the authors found that short, conversational prompts of the kind they are used to on common, accessible AI tools like ChatGPT are more intuitive than the large prompts we engineered for testing. Our long paragraphs and built-in explanations were received less favorably by some of our authors than the back-and-forth prompting they were already familiar with. This is a valuable insight as we move forward with testing.

Second, by and large we found that authors prefer to write their own content. It was much easier for them, as experts, to be the ones writing the content and then perhaps have AI assist in editing afterwards. They found it would be more difficult to have AI create the content, even if it is based on the experts’ knowledge base, because they are aware the model might hallucinate and they’ll have to comb through and edit it anyway. In this regard, it seems authors might prefer having the AI suggest edits rather than create content.

And finally, authors did find it beneficial to have AI-enhanced tools kick off tedious processes like the glossary and the list of acronyms. However, they reiterated that the AI only provides a good jumping-off point – a good amount of human intervention, so to speak, is still needed after these starting points are created.

Lesson 3: On Accuracy and Trust

As mentioned, there generally seems to be reluctance among authors to use AI for content creation. Beyond the reasons already mentioned, this reluctance also stems from how varied and unique the contexts of Book Sprints are. Each book is different, each context is different, and so authors prefer to write directly for that specific context rather than edit anything AI-generated.

In Conclusion

We’re thrilled to have been able to experiment with AI-enhanced tools in some Book Sprints this year. The lessons we learned from our foray into the Book Sprints and AI space, guided by our key principles, have shown us both the benefits and the potential pitfalls of integrating AI-enhanced tools into the process.

We are grateful to our partners who allowed us to test AI-enhanced tools in their Book Sprints and who shared their experiences with us candidly. We are heading back to the drawing board with these new insights to iterate, while keeping an eye out for new developments in the AI space that might prove beneficial to our experiments. Stay tuned for more updates!

 


 

Got a great idea? Tell the world with us through a Book Sprint. 

Send us a message on IG, LinkedIn, or at contact@booksprints.net

 
