
React-ing htmlFor=empathy

Jana Beck speaking at React Vienna in September, 2017

About this talk

An overview of divingbell.io, a conceptual prototype of partnerless partner-assisted scanning via webcam-based eye tracking.


Transcript


All right, so my name is Jana Beck. I respond to Yana, so if you get confused and call me Yana, that's totally fine. I live in California. I grew up in Minnesota, in the Midwest of the US, and then I lived for ten years on the east coast in New York City and Philadelphia, so I'm kind of all over the place, and my family is too, actually. If you're curious about that kind of stuff, feel free to chat with me later. The slide deck is online if you want to follow along, but it's a pretty small room in here, so you shouldn't have trouble seeing the code. I'm jebeck on GitHub, and the code for the project I'm going to tell you about is there, so you can find it there. I'm also jebeck on a bunch of different Slacks, and I'm iPancreas on Twitter. Data visualisation engineer is my main job, but I'm not actually going to talk about data vis tonight; feel free to chat with me about it if you want. I work at a company called Stitch Fix, which would take a while to explain, and it's also a US-only company right now, so I'll just skip that. I work in the data science group as a data vis engineer, supporting data scientists with all their data vis needs. If you're interested in what that might look like, there's a link here, reachable from the slides, that explains what we do in the data science division.

This talk, though, is about a personal project, not about data vis. And I'm going to start with a little story, and it starts with this, which is a treadmill. I'm sure a lot of you knew that's what that is. Right, it's a treadmill. I'm guessing a lot of you don't know for what purpose treadmills were first invented, so this is the story we're going to use to set the context for this talk. It's cool that there's an accessibility talk later, so I'll give you the hint that this is sort of accessibility-related, but not directly an accessibility project.

So this is one of the first treadmills. This etching was published in a London newspaper in 1817, and it shows Brixton Prison, near London. The treadmill was invented by this guy, William Cubitt, and basically he was inspired by the sight of idle prisoners to create treadmills as a way of instilling in them the habits of industry, so to speak. This was actually pretty popular in the Victorian era, this idea of atonement through hard work, and at first they didn't even have these treadmills hooked up to anything, so the prisoners were literally grinding air; they weren't milling anything with the treadmills, and that was seen as a more pure form of punishment. Kind of strange.

So this is what they looked like, again. They were not what today's treadmills look like. They were wide, more like what we call in the US a StairMaster, because you were going up rather than just forward, and they were big enough for all these prisoners to stand next to each other while they were doing this, although sometimes there were barriers in between so that each prisoner was isolated from the others. And this was super hard work. If you were sentenced to hard labour on a treadmill, you were usually on one for about six hours a day, which was the equivalent of three to four thousand metres of elevation climbed. And, oh, this version of the slide deck, sorry, has everything in feet.
For context, this is a mountain called Ben Lomond, which is named after a mountain in Scotland, but this one is in northern Utah, where my dad lives, and it's pictured from his house. We've climbed up there a few times, and I think it's about 1,200 metres of elevation from his house to the top, so that's less than what somebody would be doing on a treadmill for six hours a day in one of these Victorian prisons. Just a little context to help you imagine what that was like. So, again, this is what we have today as a treadmill. It's something that people voluntarily use in their free time now; maybe you even pay money to go use one at the gym. And then, to represent the final step in the evolution of the treadmill, we have this, right? So treadmills have come a long way. File this story away for a little bit. The moral of it, basically, is that the inventor is dead, quite literally in this case, and that the technologies we have all around us have these histories and contexts. Sometimes we don't know about them, but those histories and contexts aren't definitive.

So again, file that story away; now we're going to move on, and I'll tell you a little bit about the project I'm here to talk about today. That project starts with this guy. His name was Jean-Dominique Bauby. He was the editor-in-chief of Elle magazine, he was French, and on December 8, 1995 he suffered a massive brain haemorrhage. He fell into a coma, woke up about a month later in January of '96, and was completely paralysed. He was able to wiggle his head a little bit, and he could blink one eyelid, because his other eye had to be sewn shut since it wouldn't close properly. And yet he dictated memoirs just by blinking his left eyelid.

He described in these memoirs what his situation was like. He said: you survive, but you survive with what is so aptly known as locked-in syndrome. Paralysed from head to toe, the patient, his mind intact, is imprisoned inside his own body, unable to speak or move. In my case, blinking my left eyelid is my only means of communication. And so, by blinking his left eye many, many times, he dictated these memoirs. They were published as The Diving Bell and the Butterfly, and then, unfortunately, he died of pneumonia two days after the memoirs were published in March of 1997.

The method by which he was able to dictate his memoirs just by blinking is called partner-assisted scanning, and he described this in his memoirs as well. He said: it is a simple enough system. You read off the alphabet, ESA version, not ABC, until, with a blink of my eye, I stop you at the letter to be noted. The manoeuvre is repeated for the letters that follow, so that fairly soon you have a whole word, and then fragments of more or less intelligible sentences. That, at least, is the theory. The jumbled appearance of my chorus line stems not from chance but from cunning calculation. More than an alphabet, it is a hit parade in which each letter is placed according to the frequency of its use in the French language. This is what that frequency-ordered alphabet looks like for French. He described a lot about what it was like communicating with different people in this method. He worked primarily with a speech therapist, and then with the woman pictured back here, to whom he dictated his memoirs, but he communicated with a lot of folks this way, anyone who would visit him.
And he said: nervous visitors come most quickly to grief. They reel off the alphabet tonelessly at top speed, jotting down letters almost at random, and then, seeing the meaningless result, they exclaim, "I'm an idiot!" But in the final analysis their anxiety gives me a chance to rest, for they take charge of the whole conversation, providing both questions and answers, and I am spared the task of holding up my end. Reticent people are much more difficult. If I ask them, "How are you?" they answer, "Fine," immediately putting the ball back in my court. With some, the alphabet becomes an artillery barrage, and I need to have two or three questions ready in advance in order not to be swamped. All right, I'm going to skip some of the long quotations that are still in the deck, because those are clearly killing my voice.

So he published these memoirs that he dictated by blinking his eyelid. The Diving Bell and the Butterfly is a metaphorical title. The butterfly is the positive side of his condition; he describes in his memoirs an extraordinary life in the mind. The diving bell is the other side of that. The diving bell is really old; this is some ancient art depicting one. It was a pre-scuba diving technology, a big bell-shaped enclosure that trapped air, and the diver, inside, so you can imagine how claustrophobic it must have seemed, and that's what he compared the negative side of his condition to. I came across this story through the 2007 film adaptation of his memoirs. It was a late-night Netflix browsing session, as you do; I had no idea what the movie was about, but I ended up watching it and it just stuck with me.

So the project I'm here to talk to you about, if I can continue to talk, is a little app that I made that's a prototype of dictation using only blinking to select letters. I've called it partner-less partner-assisted scanning, because I've basically replaced the partner in partner-assisted scanning with the web application. And I've done it not entirely faithfully to his experience, which was blinking with one eye, because not all of us are super gifted in the blinking, or rather winking, department, so I just did blinking with both eyes.

Now, what makes this possible is a library called WebGazer.js. It's an open source library, in fact a copyleft library licensed under the GPL, and it comes out of Brown University's human-computer interaction lab in the US. There are computer scientists there who still maintain it, and they also collaborate with a few contributors from the Georgia Institute of Technology. Just to show you a little of how this library works: it's limited right now to being included via a script tag. There's a long-running pull request that's been open to modernise it into CommonJS modules, but it hasn't been finished yet, so for now you just have to include it with the script tag. And this is a little snippet of how you get started with it. You use the setGazeListener method to give it a callback, and every time that callback is fired you get a data object, which includes the x and y coordinates of the predicted gaze location on the screen, relative to the viewport, plus a timestamp. Then you just call the begin method, and that's it; it's really simple to get started with.
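Roughly, that getting-started snippet looks like this, assuming webgazer.js has already been loaded via a script tag so that `webgazer` is available as a global:

```js
/* global webgazer */

// The callback fires with a data object (or null, before any prediction is available)
// and a timing value; data.x and data.y are the predicted gaze coordinates in pixels,
// relative to the viewport.
webgazer
  .setGazeListener((data, timestamp) => {
    if (data == null) return; // no prediction yet
    console.log(data.x, data.y, timestamp);
  })
  .begin();
```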
Integrating it with React is a little weird, since you have to load it with a script tag and many of us are much more used to using webpack or similar tools, but basically you can just load it in a script tag. It took me a while to figure out that you need to do it in the body rather than the head for the main library; that has something to do with how they're taking advantage of WebGL, and you need the whole page loaded before you execute the script. I'm also loading one piece of it in the head: the library gives you the option of several different regression modules for doing the analysis of the webcam images that provides the eye tracking, and you can put one of them in a web worker for improved performance. It actually improves things by quite a bit, so I load that one in the head. If you haven't used web workers before, they're a wonderful little tool for putting JavaScript in what is basically a separate thread in the browser, so it doesn't lock up the UI, and since there's a lot of analysis happening on the webcam images in this case, that's really advantageous.

So that's the nuts and bolts of how to load it in a React app. But to get into the details in a code sense, how do you integrate it with a React component? There, it's all about the lifecycle. If we look at a simple WebGazer component, it has to be a component class rather than a pure functional component, because you need the lifecycle methods. We start with some initial state. In this example we'll just start with x and y as null; those will be the x and y location of your predicted gaze target on the screen. And blinks is zero. Obviously this prototype needs to have blinks, which we're not going to talk about today, although if we have a little time and you're interested, I can show you, from another, less React-focused presentation, how I pulled blink data out of the underlying data in WebGazer. Blink data isn't actually surfaced in the public API at the moment, so for now that part is all hand-waving. componentDidMount is where you do all the setup I just showed you: setting the gaze listener, but instead of logging, which was what was on the screen in the last example, now I'm calling setState to put in the blinks and the x and y. Again, this is magic hand-waving, because the blink data isn't publicly accessible. And then we just render x and y and whether you've blinked at least once or not.

As I've learned giving this talk, this is a recorded GIF, because live demos of eye tracking don't work very well; it's very, very sensitive to lighting conditions. The null part is from when it was starting, obviously, right when I pressed record and the component began mounting, and then you can see pretty quickly that it starts giving predictions of where you're looking, and that once I'd blinked, the text changed. You also want to clean up after yourself when you're using this, so in componentWillUnmount, clearing the gaze listener and then calling the end method is what you need to do. And then there's one final piece of the API that can be fun, which is showPredictionPoints. Just before the call to begin, or in between setting the gaze listener and calling begin, you can pass true into showPredictionPoints, and then pass false when you're cleaning up, and then you get this big magenta circle.
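A minimal sketch of that simple WebGazer component might look like the following, in 2017-era class-component style. The blink part stays hand-waved here too, since blink counts aren't surfaced in the public API: `getBlinkCount` is a hypothetical stub standing in for that magic, and `webgazer` is again assumed to be a script-tag global.

```jsx
/* global webgazer */
import React from 'react';

// Hypothetical stub: the real blink detection digs into WebGazer's underlying data,
// because blink counts aren't part of the public API.
const getBlinkCount = () => 0;

class WebGazerDemo extends React.Component {
  constructor(props) {
    super(props);
    // x and y are the predicted gaze coordinates; blinks starts at zero
    this.state = { x: null, y: null, blinks: 0 };
  }

  componentDidMount() {
    webgazer.setGazeListener((data) => {
      if (data == null) return; // no prediction yet
      this.setState({ x: data.x, y: data.y, blinks: getBlinkCount() });
    });
    webgazer.showPredictionPoints(true); // draw the prediction dot (the magenta circle)
    webgazer.begin();
  }

  componentWillUnmount() {
    // clean up after yourself
    webgazer.showPredictionPoints(false);
    webgazer.clearGazeListener();
    webgazer.end();
  }

  render() {
    const { x, y, blinks } = this.state;
    return (
      <div>
        <p>
          Predicted gaze:{' '}
          {x === null ? 'no prediction yet' : `${Math.round(x)}, ${Math.round(y)}`}
        </p>
        <p>{blinks > 0 ? 'You have blinked at least once!' : 'No blinks yet.'}</p>
      </div>
    );
  }
}

export default WebGazerDemo;
```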
That big circle is not what it actually looks like in the library normally; I made it really big and bright in a custom build for this presentation so that people can see it. You'll also see that my cursor is moving here in this recording, and that's because WebGazer uses cursor movements and clicks, under the assumption that you're often looking where your cursor is, to train the analysis it does on the webcam images. For that reason I don't view this project as a true accessibility project; it's more of a proof of concept, or a conceptual prototype, of what you can do with web technologies, because right now you can't build a genuinely accessible interface for getting going with this: you have to train the analysis using cursor movements and clicks, which are not accessible to someone who's completely paralysed. So that's the story there.

Now let's look at how I actually implemented this. To start with, on a conceptual level, I told you that I'm basically replacing the partner in partner-assisted scanning with the web interface, so let's think about what the partner does in that communication technique. The first part of the partner's role is to display a reference of the entire frequency-ordered alphabet, something like this; the English version looks like this. (I have a slide of the German version in another deck, sorry, not in this one.) The second part of the partner's role is to loop through all the letters, basically by pointing to them and reading them out loud. And the third part is to record the letters that the user selects, of course. So if you literally sketched this out, you might have something like this: the entire frequency-ordered alphabet displayed on the right side, the current letter on the left (and also highlighted in the frequency-ordered display), and then the selected letters in some kind of input at the bottom.

So now let's look at this in terms of actual React. Whoops, sorry. Again, we're going to be using the lifecycle, so we'll have this be a class. The frequency-ordered alphabet is really a constant in the app, so it belongs as a default prop; it's never going to change. You could get more complicated than this, and probably should, with something like react-i18next to internationalise it and have different alphabets available, but always frequency-ordered. And then the state is going to be the current letter and the selected letters. When we get to the render method, we're basically going to have three components that line up with the three boxes we saw in the UI sketch: a current letter display, the frequency-ordered alphabet, and the selected letters.

So now let's get into a little bit more code. Looping through the letters implies a start, a pause, and a reset, so we really left a few things off that earlier sketch, namely some buttons that we need. The easiest way to do the looping, of course, is to use an index on that array, so we'll have a current index. I've left the current letter in here, but of course you wouldn't really need it anymore, because the index is enough: the current letter is really derived state from the index and the default prop of your alphabet. And then we also add a started flag that lets us know what state the app is in in terms of the looping action.
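Here's a sketch of that top-level scanner component under a few assumptions: the child component names and the exact English frequency ordering are illustrative rather than taken from the actual divingbell.io source, and the looping itself (advancing the index on a timer while started is true) is left out, since that lives with the WebGazer integration described next.

```jsx
import React from 'react';

// One common frequency ordering for English; the real app would make this configurable.
const FREQUENCY_ORDERED_EN = 'ETAOINSHRDLCUMWFGYPBVKJXQZ'.split('');

// Trivial presentational stand-ins for the three boxes in the UI sketch.
const CurrentLetterDisplay = ({ letter }) => <h1>{letter}</h1>;
const FrequencyOrderedAlphabet = ({ alphabet, currentIndex }) => (
  <p>
    {alphabet.map((letter, i) => (
      <span key={letter} style={{ fontWeight: i === currentIndex ? 'bold' : 'normal' }}>
        {letter}{' '}
      </span>
    ))}
  </p>
);
const SelectedLetters = ({ letters }) => <input readOnly value={letters.join('')} />;

class Scanner extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      currentIndex: 0,     // index into the frequency-ordered alphabet
      selectedLetters: [], // letters selected so far
      started: false,      // is the looping through letters currently running?
    };
  }

  render() {
    const { alphabet } = this.props;
    const { currentIndex, selectedLetters } = this.state;
    const currentLetter = alphabet[currentIndex]; // derived state: index + alphabet prop

    return (
      <div>
        <CurrentLetterDisplay letter={currentLetter} />
        <FrequencyOrderedAlphabet alphabet={alphabet} currentIndex={currentIndex} />
        <SelectedLetters letters={selectedLetters} />
        <button onClick={() => this.setState({ started: true })}>Start</button>
        <button onClick={() => this.setState({ started: false })}>Pause</button>
        <button
          onClick={() =>
            this.setState({ currentIndex: 0, selectedLetters: [], started: false })
          }
        >
          Reset
        </button>
      </div>
    );
  }
}

// The frequency-ordered alphabet is a constant of the app, so it lives in a default prop.
Scanner.defaultProps = { alphabet: FREQUENCY_ORDERED_EN };

export default Scanner;
```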
So, with the buttons we need added to the render method, this is kind of what the whole app looks like. I've added one more thing here, which is the letter selector. For that we'll need a selectLetter method, an instance method on this class, and it would look something like this: you reset the current index after the letter gets selected and push the selected letter into the selected-letters array. Oops. Then the LetterSelector component is all about the lifecycle methods, just like we saw earlier with the simple WebGazer React component. You could actually do this in the lifecycle methods of the top-level scanner component, because we haven't used any of those yet, but I think that makes that component a bit too complicated, and it's easier to encapsulate it. So here we do something very similar to what we did before. In componentDidMount, set up WebGazer. I'm hand-waving a little more here, because it would be too much code to fit on a slide, but detecting two blinks while the same letter is active is how you select a letter, so just treat that as a little bit of magic for now, and then call the instance method that we passed down as a prop. componentWillReceiveProps is how we react to whether or not the looping is actually happening, with the starting and pausing of the loop through the letters. WebGazer has a pause method, so we pause and resume as needed according to that top-level state that gets passed down to this component as a prop, the started flag. In componentWillUnmount, again, we just clean up, like we've seen before. And then we render nothing in this one. This component is all about the lifecycle, which is maybe kind of weird, but it keeps things nicely encapsulated: all the interaction with WebGazer happens in just this little component.
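Putting that together, a sketch of the LetterSelector might look like this. As before, `webgazer` is assumed to be a script-tag global, `twoBlinksOnCurrentLetter` is a hypothetical stub for the hand-waved blink detection, and the scanner's selectLetter instance method, which this component calls through its onSelectLetter prop, is shown in the comment.

```jsx
/* global webgazer */
import React from 'react';

// Hypothetical stub: returns true once two blinks have been detected while the
// same letter was active. The real logic digs into WebGazer's underlying data.
const twoBlinksOnCurrentLetter = () => false;

// On the Scanner class, selectLetter resets the loop and records the letter:
//
//   selectLetter(letter) {
//     this.setState((prevState) => ({
//       currentIndex: 0,
//       selectedLetters: [...prevState.selectedLetters, letter],
//     }));
//   }
//
// and the Scanner renders the selector with the started flag, the current letter,
// and that method passed down as props:
//
//   <LetterSelector
//     started={this.state.started}
//     currentLetter={currentLetter}
//     onSelectLetter={(letter) => this.selectLetter(letter)}
//   />

class LetterSelector extends React.Component {
  componentDidMount() {
    webgazer.setGazeListener(() => {
      // magic hand-waving: two blinks while the same letter is active selects it
      if (twoBlinksOnCurrentLetter()) {
        this.props.onSelectLetter(this.props.currentLetter);
      }
    });
    webgazer.begin();
  }

  componentWillReceiveProps(nextProps) {
    // pause and resume WebGazer as the top-level started flag flips
    if (nextProps.started && !this.props.started) {
      webgazer.resume();
    } else if (!nextProps.started && this.props.started) {
      webgazer.pause();
    }
  }

  componentWillUnmount() {
    // clean up, like before
    webgazer.clearGazeListener();
    webgazer.end();
  }

  render() {
    // this component is all about the lifecycle; it renders nothing
    return null;
  }
}

export default LetterSelector;
```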
So here's the finished product, the prototype: a recording of me spelling something. Notice how the cursor doesn't move from the start button. Don't worry, I spell a pretty short word, so you don't have to sit here forever waiting for it. The first selection happens here, with two blinks, with an H (I don't know if that's big enough to read), and then the second one right here, with an I. So, just spelling "hi", without using any hands.

So now we're back to this, and I hope you've been sitting here wondering what the whole story about treadmills had to do with any of this. What it has to do with this is that I embarked on this project as an exercise in empathy, having watched that film of The Diving Bell and the Butterfly and really feeling like I wanted to understand what it would be like to communicate that way. I definitely got that out of it, the empathy exercise, but I got more out of it too, which is that using WebGazer really changed the way I think about another piece of technology: the webcam. I think the webcam has a pretty low public profile right now. Chatroulette did a lot to destroy its reputation, and it's seen as a vector for spying and hacks and all sorts of things like that. I love this article that I found; it just skips over the "should you cover your webcam?" question and goes straight to "what's the best way to cover your webcam?", which feels representative of public opinion on webcams today.

But when I was researching this I found this. It's a little hard to see on this background, but it's a picture of a coffee pot, and it's one of the images from arguably the very first webcam, which was installed on a pre-World Wide Web network at the computer lab in Cambridge, in England. The reason for it was that a bunch of the engineers and computer scientists in the building had a coffee club with a communal coffee pot, pictured here, but some of them were several floors and staircases away from it, and they would get really annoyed if they walked three flights of stairs only to find the coffee pot empty. So they built this camera and pushed its images onto their local network, taking a picture of the coffee pot every five minutes, so they could always check whether there was any coffee.

So we have a contrast here between treadmills, which started as an instrument of hard labour in prisons, an instrument of punishment, and are now something we voluntarily use in our free time, and webcams, which started as this amazing invention to improve human caffeine system dynamics efficiency and are now something many of us voluntarily disable by taping over them on our computers. So this brings us back to the idea that technologies have histories and contexts, but those histories and contexts are not definitive, and I don't think we should let them be. We should broaden our horizons a little in how we think about the kinds of components that can go into what we build.

So that's it, basically. Here are some references and resources. The little prototype is at divingbell.io, and I encourage you to try it out. It takes a very zen mindset to spell long things. Be careful that your face is well lit, and it also doesn't work super well depending on what kind of glasses you're wearing, so be forewarned. And yeah, a couple of other links, and here's the link. Also, if we have time for a little Q&A, I do have a few slides I can pull up quickly to show you a little of how I got the blink data out, so let me know if that's something you want to see.