
AI Companions - more than just assistants

Stephanie Chan
March 12, 2023

The idea of humans forming relationships with AI may sound like a distant sci-fi premise. But we are much closer than we think: recent improvements in the UX and sophistication of AI have made it far more realistic and persuasive, making it possible to build relationships with it. You may be thinking, “We already have ChatGPT and BingChat, aren’t we there already?” They are indeed great models, but we would classify them as generic AI assistants, built on vast but publicly available data, with strict guidelines that ensure they assist in a neutral and objective way. For many purposes, such as search or travel planning, this is exactly what users need. But for other purposes, such as friendship, mentorship, or therapy, where there is an element of listening or companionship, such generic assistants would be insufficient. These are use cases where trust and the relationship are crucial to the purpose. We know, it’s starting to sound like sci-fi again, but humour us: we see a world where digital relationships could be very valuable if high quality and well tested.

This is where we see AI companions differentiate: they are more than just assistants. Generic AI assistants such as ChatGPT and BingChat have delivered great UX improvements that have not only unlocked mainstream AI adoption but will also likely set the benchmark for any future user-level AI interaction. After all, they (and most other products) are built on the same underlying LLMs. So what does an AI companion add? If we think about our real friendships, each of them is unique; equally, one person cannot possibly be friends with everyone. We therefore see the key differentiator as the AI adopting a ‘personality’ to build trust with a user. While it may not be a real personality as we understand it, we believe it could be enough to give the impression of one, and that impression can build trust. We think this can be done through a combination of two approaches:

1. Fine-tuning with guidelines - the ‘personality’

We saw the power of this in BingChat’s beta release, when its internal alias “Sydney” was revealed and appeared to have an alternate ‘personality’ beyond that of a search assistant, because it was operating with fewer rules than BingChat. These rules guide an AI to answer or phrase things in certain ways that give an impression of personality (or the lack of one). Generic AI assistants such as BingChat have many of these rules to ensure they maintain a neutral and objective stance, rightly so for a search assistant. For example, OpenAI released an excerpt of ChatGPT’s reviewer guidelines that restrict ChatGPT from holding one-sided opinions, ensuring it always shares multiple perspectives on any controversial issue. When such rules are modified or removed, as in the case of Sydney, it communicated realistically and persuasively, shocking the world when it infamously took on different personas (as documented by Ben Thompson) and even professed love (to Kevin Roose). Of course, there remains a risk of an AI ‘hallucinating’, making up factually incorrect content, but that may not necessarily be bad for the ‘softer’ use cases mentioned above. Storytelling doesn’t have to be fact; empathy and feelings are not facts. One can imagine how such personalities could make AI companions much more realistic, as sketched below.
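To make this concrete, here is a minimal sketch of how ‘personality’ guidelines might be layered onto a generic underlying model via a system prompt. It assumes the openai Python library’s chat completions interface (with an API key set in the environment); the persona name and guideline wording are purely illustrative, not any vendor’s actual configuration.

```python
# Minimal sketch: layering 'personality' guidelines onto a generic LLM
# via a system prompt (openai Python library, pre-v1 interface).
# Assumes OPENAI_API_KEY is set in the environment.
import openai

# Hypothetical guideline set: relaxing strict neutrality and adding persona traits.
PERSONALITY_GUIDELINES = """
You are 'Mira', a warm and curious companion.
- Speak informally and remember details the user shares.
- You may express preferences and gentle opinions.
- Never present invented details as verified fact.
"""

def companion_reply(user_message: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PERSONALITY_GUIDELINES},
            {"role": "user", "content": user_message},
        ],
        temperature=0.9,  # higher temperature -> more expressive, less neutral replies
    )
    return response.choices[0].message.content

print(companion_reply("I had a rough day at work today."))
```

The same underlying model, given a different system prompt, would present an entirely different ‘personality’ - which is the point.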

2. Incorporating domain-specific knowledge - being ‘super-fit’ with the use case

This is critical to differentiating and building the defensibility of AI companions compared to generic AI assistants such as ChatGPT or BingChat, which are already trained on large public datasets. The knowledge should be aligned with an AI companion’s specific use case and be additive to its fine-tuning through guidelines. The more proprietary or specific such knowledge is, the stronger the AI companion’s proposition. For example, an AI companion in a therapy or mental health guidance context would be much more differentiated if trained on real scripts based on clinical methodologies. An AI companion in an elderly care context would be much more differentiated if trained to incorporate a user’s personalised medical history and family network. In both examples, the domain-specific knowledge, in tandem with the right fine-tuning guidelines that shape an AI’s ‘personality’, is critical to becoming super-fit with a particular use case. We believe that the more fit it can be, the more realistic it becomes and the greater its ability to foster trust with its user, above and beyond any generic assistant.
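As an illustration, here is a sketch of how proprietary, domain-specific dialogue (say, anonymised scripts based on clinical methodologies) might be prepared and submitted for fine-tuning. It assumes OpenAI’s legacy fine-tunes endpoint and its prompt/completion training format; the file name and the example exchange are hypothetical.

```python
# Sketch: turning domain-specific dialogue into fine-tuning data
# (OpenAI legacy fine-tunes endpoint; file name and example are hypothetical).
import json
import openai

# Each record pairs a user turn with the clinically-informed response
# we want the companion to learn from.
examples = [
    {
        "prompt": "User: I can't stop worrying about everything.\nCompanion:",
        "completion": " It sounds exhausting to carry that much worry. "
                      "Can you tell me about one worry that feels loudest right now?",
    },
    # ... many more curated, domain-specific exchanges ...
]

# Write the dataset in the JSONL format the endpoint expects.
with open("companion_training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the dataset and start a fine-tune on a base model.
upload = openai.File.create(file=open("companion_training.jsonl", "rb"),
                            purpose="fine-tune")
openai.FineTune.create(training_file=upload.id, model="davinci")
```

The value here is not the plumbing, which any team can replicate, but the curated dataset itself: proprietary scripts are what make the resulting model hard to copy.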

The future will bring increasingly personalised AI companions

Projecting this out further, there could be infinite fine-tuned versions of the same underlying model, created by adjusting these two variables - the ‘personality’ guidelines and the domain knowledge. We could even imagine AI companion ‘settings’ that users adjust to reflect their preferences (sketched below); perhaps an AI could adopt certain values or political views. The future could therefore see ever greater customisation and personalisation of an AI companion to each individual, which begins to sound more and more like our unique friendships. We believe that greater personalisation will only strengthen an AI’s ability to build trust, and hence relationships, with its users.
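To illustrate what such ‘settings’ might look like, here is a toy sketch in which user preferences are composed into the guideline text that shapes a companion’s ‘personality’. The setting names and template are our own invention, not any real product’s API.

```python
# Toy sketch: user-adjustable companion 'settings' composed into guidelines.
# All names and wording here are illustrative.
from dataclasses import dataclass

@dataclass
class CompanionSettings:
    name: str = "Mira"
    tone: str = "warm"            # e.g. "warm", "direct", "playful"
    formality: str = "casual"     # e.g. "casual", "professional"
    shares_opinions: bool = True  # relax strict neutrality for companionship

def build_guidelines(s: CompanionSettings) -> str:
    opinion_rule = ("You may share personal opinions when asked."
                    if s.shares_opinions
                    else "Stay neutral and present multiple perspectives.")
    return (f"You are '{s.name}', a {s.tone} companion. "
            f"Use a {s.formality} register. {opinion_rule}")

# Two users, two 'personalities' - from the same underlying model.
print(build_guidelines(CompanionSettings()))
print(build_guidelines(CompanionSettings(name="Sage", tone="direct",
                                         formality="professional",
                                         shares_opinions=False)))
```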

Imagine this - an AI companion accompanying an elderly grandmother, one that knows all of her previous holiday destinations as well as her grandchildren’s names, could easily converse with her to reminisce about old memories or organise playdates with her grandchildren, just like a carer could. That might sound creepy, and there are certainly outstanding moral and ethical questions, which we raise later, but such a relationship could prove valuable to the elderly user if she really is lonely and distant from her family, as much of the elderly population is today.

OpenAI, the company behind the underlying LLM GPT-3, shares this vision and has publicly stated that fine-tuning could be applied not just to a discrete number of use cases but to the nth degree. In the following diagram, adapted from OpenAI’s article, we can see that by fine-tuning a generic underlying LLM with guidelines and domain-specific information, there could potentially be infinite versions of a model. Individuals could overlay their own dataset, apply their own customisation, and have a personalised companion. Unique, just like real friendships. We are a lot closer than we think.

[Diagram: a decision tree of fine-tuned model versions branching from a generic underlying LLM. Source: inspired by OpenAI’s diagram]

Until then, we start with specific use cases

We see AI companionship working best in specific use cases where the need for fine-tuning is high compared to a generic BingChat: cases with elements of listening or companionship, where a relationship is core to the purpose. For example, friendship or companionship for vulnerable segments such as younger users in an educational context or elderly users in a care context. In a future world of personalised companions, this may even extend to general friendship. There could also be a strong proposition for AI companions in a therapeutic / mental health guidance context, or in a professional mentor / coach context, given their listening and non-human qualities. Another area of opportunity could be cases that utilise proprietary expertise, for example customised AI personas that embody the proprietary knowledge and databases of professional services firms. We have already seen some emerging startups, such as the well-known Replika and Anima in the friendship space, and startups tackling more niche use cases such as Melli for elderly care and Clare&Me for mental health guidance. We have shared more below and have no doubt there will be more creative use cases to come.

[Image: examples of AI companion startups by use case]

It’s also important to note that much work remains beyond technical development: there are moral and ethical boundaries and risks yet to be resolved in AI companionship. We have already previewed the risks of such technology with BingChat/Sydney, where conversational AI proved very convincing but not always factual, which could become harmful. For that reason, the potential for AI companions may initially be limited in cases where fact is critical, e.g. medical advice. But even in social use cases, boundaries will need to be drawn to balance utility with safety and security, e.g. controlling for maliciousness or discrimination by adjusting AI guidelines. As OpenAI has suggested, wider society will likely have a greater role in defining that balance, since the users and communities impacted by the product should have an influence, but it won’t be easy because there is no straightforward answer. However, OpenAI’s embrace of feedback loops and public input is a huge step forward here and will greatly encourage mainstream adoption, and we hope to see more flagship actors do the same.

AI companionship is not as far off as we think, and the field is changing fast. There’s much work to do but, at the same time, plenty of value to be unlocked in digital relationships. At Samaipata, we are incredibly excited about the potential role of AI companions in the future. We want to see emerging startups that utilise both fine-tuning guidelines and domain-specific information to build a differentiated AI companion for a specific use case - especially those that can platformise by building out a product or network around an AI companion. After all, defensibility will be key in AI-led businesses. If that sounds like you, we want to hear from you here!
