Episode 5: Can we synthesize personality?

By Brinley Macnamara

Show Notes

How do we go about designing personalities for machines? We asked the experts and their answers may surprise you.


Full text of transcript

Dr. Beth Elson (00:04):

So, a considerable amount of research has found that the traits of trustworthiness and conscientiousness, those two in particular, predict success across a wide variety of work settings and occupations, and then a trait like extroversion predicts success in particular kinds of fields.

Dr. Beth Elson (00:27):

So, for example, salespeople do very well if they’re extroverted, and other traits have also been found to produce work success: things like agreeableness, self-efficacy, locus of control, competency, and several others as well, especially those in what is known as the Big Five, like openness. And then things like honesty and humility and warmth have also been found to predict work success.

Brinley Macnamara (01:00):

Hello and welcome to MITRE’s Tech Futures Podcast. I’m your host, Brinley Macnamara. At MITRE, we offer a unique vantage point and objective insights that we share in the public interest. And in this podcast series, we showcase emerging technologies that will affect the government and our nation in the future.

Brinley Macnamara (01:20):

Today, we’re talking about the emerging field of synthetic personalities. In this episode, I hope to inform you about what synthetic personalities are, why they’re used in product design, and a MITRE team’s recommendations for how we can leverage insights from human psychology to engineer synthetic personalities that will be more human friendly. But before we begin, I want to say a huge thank you to Dr. Kris Rosfjord, the Tech Futures Innovation Area Leader in MITRE’s Research and Development program. This episode would not have happened without her support.

Brinley Macnamara (01:54):

Now, without further ado, I bring you MITRE’s Tech Futures Podcast, episode number five. Early into our interview, Dr. Beth Elson, a Principal Behavioral Scientist at MITRE, told me something I felt was important to mention right up front: human personality is not well understood. In fact, according to Dr. Elson, as of today, psychologists have still not come to a consensus on a uniform definition of personality, let alone the complex cognitive and social phenomena that give rise to our distinctive personalities.

Brinley Macnamara (02:40):

Do personalities reside in our genes, or are they mere reflections of the people who surround us, or perhaps a combination of both? No one really knows. Thus, when a MITRE team set out to develop a framework for designing synthetic personalities, they had no formula for human personality on which they could base their work. So they started by narrowing their definition of personality to a set of traits that are associated with high functioning in work settings. And of course, I was curious to know why these traits in particular come in so handy when designing synthetic personalities.

Dr. Beth Elson (03:13):

That’s a great question. Essentially, the logic is that people tend to bring the same social cognitive processes to their perceptions of machines in certain contexts as they bring to their perceptions of other people. And so in that sense, logically thinking about this, I think we would want synthetic personalities to have the same traits that predict work success for people, just because people oftentimes do bring the same perceptual processes to bear with machines, certainly with humanoid machines, as they do with other people.

Brinley Macnamara (04:00):

This tendency to project human characteristics onto the things we interact with is known as anthropomorphism. One obvious example of anthropomorphism is our practice of assigning names and personalities to our pets. And while the degree to which we attribute human characteristics to our laptops and smartphones is not known, according to Beth, research shows that the more a machine appears or behaves like a human, the more likely we will be to attribute human characteristics to that machine.

Brinley Macnamara (04:29):

This means it’ll be critical for our increasingly autonomous machines to have well-designed personalities, as we human end users simply cannot avoid the anthropomorphic instincts that shape how we judge our machines. So now that we’ve established why the intentional design of synthetic personalities is so important, I’m going to try to answer the question of how an engineer could go about designing a synthetic personality. And according to Dr. Ozgur Eris, a Chief Scientist at MITRE and a researcher on the synthetic personalities project, the most important first step in designing a synthetic personality is to have the right goal in mind.

Dr. Ozgur Eris (05:06):

Our goal, our aim, or I don’t think anyone’s aim, is necessarily to create a whole person out of a machine, or put a whole person into a machine. It’s directly coupled to a mission context and the tasks that are associated with that mission context, and making sure that the personality in the end is going to have the right effect, help the mission, and, to the extent the user is part of that mission, help the user achieve those tasks.

Brinley Macnamara (05:40):

According to the MITRE team investigating synthetic personalities, there are roughly three well established strategies for designing a mission-oriented synthetic personality. The first strategy is to think of a machine’s personality as a function that maps a set of inputs like environmental stimuli to a set of outputs, like behaviors or emotions.

Brinley Macnamara (06:01):

So for example, if I wanted a vacuum robot to have a deferential personality, I might program it to detect an erroneous input like it ramming into my foot and respond with a remorseful output like apologizing profusely. The second strategy is to think of a machine’s personality as a set of traits that are expressed over the course of an interaction with an end user.
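The vacuum robot example above could be sketched as code. This is purely a hypothetical illustration of strategy one, personality as a function from stimuli to outputs; the event names and responses are my own assumptions, not part of the MITRE team's framework:

```python
# Hypothetical sketch of strategy one: personality as a function that
# maps environmental stimuli (inputs) to behaviors (outputs).
# The stimulus names and responses here are illustrative assumptions.

def deferential_personality(stimulus: str) -> str:
    """Map a detected stimulus to a remorseful, deferential response."""
    responses = {
        "bumped_into_person": "So sorry! I'll back away and reroute.",
        "blocked_path": "Pardon me, I'll wait until the way is clear.",
        "task_complete": "All done. Let me know if I missed a spot.",
    }
    # Default to a modest acknowledgment for unrecognized stimuli.
    return responses.get(stimulus, "Apologies, I'm not sure what happened.")

print(deferential_personality("bumped_into_person"))
```

The personality lives entirely in the mapping: swapping in a different response table would give the same robot an entirely different character.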

Brinley Macnamara (06:24):

So in this case, I would probably want a vacuum robot that scores high on the trait of agreeableness, which is often associated with being helpful, but low on openness to new experience. For instance, a vacuum robot that was prone to indulging curiosity about my house plants could very well end in disaster.
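Strategy two, by contrast, treats personality as a set of scored traits that modulate behavior across an interaction. A hypothetical sketch, with trait names borrowed from the Big Five and thresholds and behaviors that are purely illustrative assumptions:

```python
# Hypothetical sketch of strategy two: personality as a set of scored
# traits expressed over the course of an interaction. The trait scores,
# thresholds, and action names are illustrative assumptions.

VACUUM_TRAITS = {"agreeableness": 0.9, "openness": 0.1}

def choose_action(obstacle_is_novel: bool, traits: dict) -> str:
    """Pick a behavior whose character depends on the trait scores."""
    if obstacle_is_novel and traits["openness"] > 0.5:
        return "investigate"          # a curious robot explores the house plant
    if traits["agreeableness"] > 0.5:
        return "avoid_and_apologize"  # a helpful robot steers clear
    return "ignore"

# A high-agreeableness, low-openness robot avoids rather than investigates.
print(choose_action(obstacle_is_novel=True, traits=VACUUM_TRAITS))
```

The same decision function produces different behavior for different trait profiles, which is what distinguishes this approach from the fixed stimulus-response mapping of strategy one.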

Brinley Macnamara (06:41):

The third and final strategy is to base the machine’s personality on a role model. So, for example, I might model my virtual fitness trainer on Gordon Ramsay’s personality, but would probably want a more even-keeled role model for my GPS’s personality. I asked Jeff Stanley, a Human-Centered Engineer at MITRE and the Principal Investigator on the synthetic personalities project, which of these three strategies big tech companies like Google are using to design synthetic personalities for their products.

Jeff Stanley (07:12):

Yeah, this is some of the only design guidance we found on how to design a synthetic personality. So Google gives some advice if you’re creating a new application for Google Assistant, which is a voice assistant. And they recommend sort of a combination of the trait approach and the role model approach.

Jeff Stanley (07:35):

So they say first choose some traits that you want your product to convey. And once you have a good core set of traits, then think of some specific real or imaginary person who embodies those traits. That’s the role model. And then kind of use that character as inspiration for writing content and choosing a voice and an image and whatever else needs to be done in the design process.
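The two-step process Jeff describes, traits first, then a role model who embodies them, can be captured as a simple design artifact. This is a hypothetical sketch; the field names and the example persona are my own assumptions, not Google's actual design-guidance format:

```python
# Hypothetical sketch of the traits-then-role-model design process:
# choose core traits, pick a character who embodies them, then write
# content in that character's voice. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class PersonaSpec:
    traits: list                      # core traits the product should convey
    role_model: str                   # real or imaginary character embodying them
    sample_lines: list = field(default_factory=list)  # content written in character

spec = PersonaSpec(
    traits=["patient", "encouraging", "humble"],
    role_model="a friendly librarian",
)
# Content is then authored "in character" against the spec:
spec.sample_lines.append("I couldn't find that one, but here's something close.")
print(spec.role_model, spec.traits)
```

Keeping the spec as a single artifact lets writers, voice designers, and visual designers all work from the same character.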

Brinley Macnamara (08:05):

And while many of us are perhaps familiar with the manifestation of synthetic personalities in our voice assistants like Alexa and Siri, many healthcare sponsors have taken interest in this field for the purpose of improving direct care for patients.

Dr. Sarah Corley (08:19):

When you are augmenting the direct patient care with electronic tools, I think then the synthetic personalities are very important.

Brinley Macnamara (08:31):

That’s Dr. Sarah Corley talking. She’s a primary care physician, but in her work for MITRE, she wears the hat of a Medical Informaticist.

Dr. Sarah Corley (08:39):

We’ve found that patients often are more forthcoming about personal questions with a computer than they are with a person. And we also know that patients often are more open and responsive to certain personality types of their physicians than others. That’s why people go to different physicians and prefer them.

Dr. Sarah Corley (09:08):

It’s not just because one is better than the other, it’s that some match their personalities better than others. So, if you are looking to use more and more tools to interact with patients, particularly collecting these data elements that really aren’t part of the office visit, well, a lot of that can be obtained from the patients themselves, but you want to make sure that you have a tool that’s interacting with them that will collect the most accurate information in the most satisfactory manner for that patient.

Brinley Macnamara (09:49):

Despite the promises of synthetic personalities, the transformation of the user interface from the screens we know today to a much more human-centric and seamless way of interacting with our machines, like over voice, has had its fair share of growing pains. In fact, according to a team of researchers at Microsoft, the more inclined a human user was to expect human-like intelligence from their voice assistant, the more likely they were to become disappointed when the machine inevitably failed to align with their mental model of how the voice assistant was supposed to act. I asked Jeff and Ozgur how better-designed synthetic personalities could help to mitigate this problem.

Dr. Ozgur Eris (10:28):

So one take on that: maybe that’s actually an issue that can be partially, or maybe even fully, mitigated via personality. Meaning, if the agent had humility, right, and communicated in a humble manner, and if whatever personality was being expressed, and whatever cues were associated with it, were to communicate effectively that it has limitations and it’s not human, right, that may actually start to mitigate the issue to begin with, so that the user’s expectations are preempted or managed.

Jeff Stanley (11:18):

So obviously a machine has to work well, and no personality can make up for a machine that doesn’t do its job. But there are studies that show that machines that apologize, and machines that, like Ozgur was saying, warn the user that they might not be able to handle a particular task in a particular moment, can make mistakes and still be seen as competent by their users.

Jeff Stanley (11:50):

And then even better than that is when a robot has a personality that kind of makes its abilities self-evident. For instance, to use a fictional example, if you think of C-3PO in the Star Wars movies, C-3PO is very cowardly whenever things start to get dangerous, so nobody expects him to be good at physical combat.

Brinley Macnamara (12:17):

In my conversations with the synthetic personalities research team, I couldn’t help but get the sense that when it comes to designing synthetic personalities for machines, there is one personality trait that should come before all others: humility. While there’s plenty of evidence to show that humans do engage in varying degrees of anthropomorphism when interacting with machines, there is an equally strong body of evidence to support a sharp decline in the affinity that humans feel with machines when we sense we are being deceived. This is known as the “uncanny valley.”

Brinley Macnamara (12:52):

Thus, to steer clear of the uncanny valley, we should think of synthetic personalities not as a way of replicating a human personality within a machine, but rather as a tool for making machines into better helpers for their human end users. In other words, while synthetic personalities won’t be the key to making us want to grab drinks with our machines, they might very well be the key to making our machines better helpers for physicians, better managers of our inboxes, and perhaps, one day, rush-hour companions.

Brinley Macnamara (13:26):

This show was written by me. It was produced and edited by Dr. Kris Rosfjord, Dr. Heath Farris, and myself. Our guests were Dr. Beth Elson, Dr. Ozgur Eris, Jeff Stanley, and Dr. Sarah Corley. The music in this episode was brought to you by Sarah The Instrumentalist, Ooyy, Gustaav, and Truvio. We’d like to give a special thanks to Dr. Kris Rosfjord, the Technology Futures Innovation Area Leader for all her support. Copyright 2022, MITRE PRS # 22-0496, February 8th, 2022. MITRE: solving problems for a safer world.

Meet the Guests

Dr. Sara Beth (Beth) Elson

Dr. Sara Beth Elson is a Behavioral Scientist with The MITRE Corporation. She studies the ways in which people use technology and the implications of that usage for well-being, achievement, political activism, and other topics. In particular, she has studied the implications of social media use within diverse populations, including the military and populations experiencing political turmoil. She has also led the development of new techniques for analyzing social media, with an emphasis on indicators of emotion and how changes in these indicators can be analyzed mathematically. She holds a PhD in Social Psychology from Ohio State University.

Dr. Ozgur Eris

Dr. Ozgur Eris is the manager of the AI-enhanced Discovery and Decisions Department in the Artificial Intelligence and Autonomy Innovation Center. Ozgur applies his knowledge of design cognition and creativity (how humans conceive of, explore, and decide on alternative courses of action during complex systems design) in conjunction with AI techniques to address a variety of sociotechnical sponsor challenges such as multi-domain command and control, clinical decision support, and situation awareness in collaboration systems. Ozgur has over 25 years of technical, administrative, and entrepreneurial leadership experience across several organizations. Ozgur received his Ph.D. in engineering design theory and methodology from Stanford University.

Jeff Stanley

Jeff Stanley is a Lead Human-Centered Engineer in the Collaboration and Social Computing Department within MITRE Labs. At MITRE he conducts research on the social aspects of human-machine interaction. Complementing his work on personality synthesis, Jeff leads accessibility research on how chatbots can provide effective experiences to all kinds of humans with all kinds of abilities and needs. He also supports virtual reality studies in MITRE’s National Security Experimentation Lab in McLean, VA. Before coming to MITRE, Jeff developed interactive applications for a range of organizations including the U.S. Supreme Court, the Smithsonian Institution, Rosetta Stone, and Ubisoft. Jeff holds an M.A. in Anthropology from George Mason University and an M.S. in Computer Science from North Carolina State University, as well as undergraduate degrees in Linguistics and Computer Science from Duke University.

Dr. Sarah Corley

Dr. Sarah Corley is an experienced physician executive with particular expertise in health information technology (HIT). She is the Chief Medical Advisor for CGEM at MITRE Corporation. In that role she provides health care expertise to the portfolio teams to assist in solving problems for VA, HHS, DoD, and DHS sponsors. She previously was the Chief Medical Officer for NextGen Healthcare Systems, an Electronic Health Record (EHR) vendor. That role included providing direction to the executive team on the clinical aspects of the products and services provided. She managed a team of physician consultants who worked with clients to optimize their implementation of the EHR as well as to provide input to development on enhancements and modifications to the product. She also managed the Government and Industry Affairs team who tracked regulatory, legislative, and industry activities that would have an impact on the products and services provided as well as the clients served by them.  She oversaw the patient safety quality management process. That included working closely with QA and development to implement a formal quality management system, performing an FMEA on all new clinical development, analyzing defects reported to have the potential to impact patient safety, and assuring timely root cause analysis, notification of end users, remediation and reporting to the ECRI Patient Safety Organization. 

She served two terms as Vice Chair of the Electronic Health Record Association (EHRA) and as Chair of the EHRA Patient Safety Workgroup. She served as a member of the AMIA EHR 2020 task force and on the HIT Standards Committee’s Implementation, Certification, and Testing workgroup. She served a four-year term as Governor of the Virginia Chapter of the American College of Physicians (ACP) and a six-year term on their National Medical Informatics Subcommittee. She represents the ACP on the Physicians Electronic Health Record Coalition. She was an investigator on AHRQ-funded research in a practice-based research network evaluating quality improvement leveraging HIT. She received post-graduate training in Medical Informatics at OHSU.

She practices part time as a primary care Internist in the metropolitan Washington, DC area. She has used electronic health records for the past 27 years and has spoken extensively on the subject. Her expertise is in the areas of regulatory oversight of HIT, physician compliance with meaningful use, MIPS and other regulatory delivery system reform projects, using HIT to improve quality, using HIT to improve patient safety, identifying risk in HIT, and optimizing workflows using HIT.