It's your data.
We will never sell it to anyone else or use it in any way other than to communicate with you, and you alone.
If you ever want to leave, all you need to do is request your data and we'll send it to you. You can then ask us to delete all your data, which we'll happily do.
Users share some of their deepest held ideas, desires, and personal information with our AI. We recognise this and embrace the great responsibility that comes with it. We are committed to bringing exceptional care to the management and stewardship of your data.
It’s not easy to talk about, but we’re all going to die one day. Tell us in advance what you want us to do with your data and we will carry out your wishes.
We will use your de-identified conversations to improve the quality of our AIs. De-identified means that we remove your name, phone number, email address, and other identifiers from your logs before giving them to our model to learn from. We commit to never selling or sharing your data with any other party, under any circumstances, without your explicit, plain-English permission.
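To make the idea concrete, here is a minimal sketch of what removing identifiers from a log line can look like. This is illustrative only, assuming simple regex-based redaction of emails and phone numbers; a real de-identification pipeline would use far more robust PII detection (for example, named-entity recognition) and cover many more identifier types.

```python
import re

# Hypothetical patterns for two common identifier types.
# A production system would handle many more, and more carefully.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def de_identify(text: str) -> str:
    """Replace known identifier patterns with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(de_identify("Reach me at jane@example.com or +1 (555) 123-4567."))
# -> Reach me at [EMAIL] or [PHONE].
```

The point is that the model only ever sees placeholders like `[EMAIL]`, never the identifier itself.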
We will stumble, but we’ll focus on constant improvement.
As a field, we are just beginning to understand how to align AI with human values. We are still learning a great deal about how to ensure that these systems are safe, even as the landscape of risks and threats changes day to day.
We will not always get it right.
We expect to make mistakes. We are committed to taking feedback rapidly, learning from it with humility, and constantly improving the safety and reliability of our AIs.
We’re transparent about what we stand for.
Safety at its heart is a question of values. Companies choose what risks to prioritize, and how to address them.
We believe the best principle is to be deliberate about these choices, and transparent with our users about the specific values we build into our AIs.
We may prioritize values that you disagree with.
That’s OK. We think that there is room for many perspectives in the design of personal AIs, and that many alternatives will exist for whatever needs you might have. We commit to sharing publicly what positions we aim to take in our AIs.
We put your interests first.
Evie will always be on your side and aligned with your interests. Our goal is to help you clarify and articulate your personal intention so that you can teach Evie to constantly work towards serving you in the best way possible.
We never want to be incentivised to keep you engaged for the sake of it. We commit to creating a personal AI that is truly on your side and always puts your best interests first.
We design with you.
Your interactions with Evie will play a critical role in teaching it to be smarter and more useful for you over time. At the same time, you’ll also be teaching us as we work to better understand how people want personal AI to best fit into their lives.
Our approach is to design with, rather than design for, our core users.
We commit to staying focused on our community of dedicated users and serving them as best we can.
Our Approach
Safety sits at the heart of our mission and culture.
People rightly expect that the technologies we bring into our lives should be safe, trustworthy, and reliable.
Personal intelligence is no exception. Safety is an iterative process at Evie.
First, we establish clear safety policies that lay out specifically the values that we want to embed in our technology.
Second, we align the model through various technical methods to conform to the policy.
Finally, we have an ongoing process of review and improvement to verify that the model is complying with the policy, and to identify areas needing adjustment.
By following this approach, our objective is to create the foundation of trust that will enable Evie to deliver on the promise of a truly personal intelligence.
This post provides an overview of our current thinking on each of these steps, but the framework is constantly evolving.
Interactive AI is in its earliest stages and far from perfect.
As we work to continually improve our techniques and methodology, we’ll share updates publicly on our blog.