iLet Research and Testing

"I trusted it, after I learned it. It had to learn me, but I had to learn it."
“People obsess over the number, but physiologically it is a range anyways.”


As with automated cars and planes today, there is still a human at the wheel; this device is no different. So the iLet must be designed for that person, not for the industry it comes from.

While the device is intended to automate as much of the work as possible, it is limited by the action speed of insulin and glucagon and by the unpredictable, fallible nature of the human using it. The negotiation and collaboration between the algorithm and the person is therefore critical.


Using various stimuli and probes, we interviewed 15 people who had participated in clinical trials of a previous version of the device. We asked about both general topics and specific interactions.

The images above document a card sorting exercise, through which we learned that information people “need to know” on current pumps (like CGM history) became merely “nice to know” when the participant was using the iLet. The more important things were

“Because it’s a machine, it’s doing all these things for you – you don’t have to know all of that stuff – it’s neat to see, but you don’t have to know that.”

“The hard part was being honest about what you’re telling the device. The software could help you be more honest about what you’re eating…”


We built a web app to prototype the first version of the interface, and used a Yota Phone (a handset with a secondary eInk display) to test the contrast of the interface and the interaction patterns of an eInk screen.
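One practical concern when targeting a grayscale eInk display is whether the interface palette stays legible. As a minimal sketch (not part of the project's actual test process, which used the physical phone), the standard WCAG contrast-ratio formula can screen color pairs programmatically; all names below are illustrative:

```typescript
type RGB = [number, number, number];

// Linearize an 8-bit sRGB channel per the WCAG relative-luminance formula.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance weights the linearized R, G, B channels.
function relativeLuminance([r, g, b]: RGB): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05); ranges from 1 to 21.
function contrastRatio(a: RGB, b: RGB): number {
  const la = relativeLuminance(a);
  const lb = relativeLuminance(b);
  return (Math.max(la, lb) + 0.05) / (Math.min(la, lb) + 0.05);
}

// Black text on a white eInk background hits the maximum ratio of 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
```

A screen like this only approximates eInk rendering, which is why testing on real hardware, as the team did, remains the decisive check.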

  • Date: 2018 - Present
  • Partners, Clients, Funders: Stanford University and The Helmsley Trust
  • Team: Brian Hoffer, Korey Hood, Sierra Nelmes, Alex, Grace, Laurel, Aly, Upshift, Kelsey, Justus