CHI 2014: Keynote – Scott Jenson of Google

“The Physical Web”

  • Apple, Fro Design, now Google
  • contrast experience of Amazon Whispersync (zero experience, it just works) vs Jabra headphones (waking his wife up in other room)
  • mobile moving too fast to standardize yet, eg pull to refresh is great but won't be here forever; even steering wheels haven't been standardized, need for it will go away, and history is quite varied; dialectic between users and technology
  • shape of innovation: familiarity -> maturity -> revolution; eg DOS -> Lotus 123 -> GUI; lesson 1 – we'll always borrow from the past, lesson 2 – maturity is an intellectual gravity well that's hard to escape
  • limiting factor isn't technology but our own psychology; everyone wants innovation but not risk; not afraid of future but attached to the past
  • Internet of Things changes everything; not a lot of good thinking evidenced in the media today; smart devices – Nest and Quirky Egg Minder, individual functional devices; home automation – everything connected and networked; we need to think about the implications and consequences with all this stuff in combination
  • IoT can't be a set of if/then rules because humans are goofy and do unexpected things
  • Moravec's Paradox: HardEasy (we think it's hard but it turns out easy, like chess) vs EasyHard (we think it's easy but it turns out hard, like translation) – home automation is an EasyHard problem; we need systems that expect us to be human rather than forgetting it
  • smart devices: today each device has its own app; can't sustain that, they don't scale to millions of smart devices
  • just-in-time interaction: use it then lose it; no need to hold on to it or remember it after you're done
  • smartness layers: coordination (whole environment collaborates), control (one device), discovery (things project tiny bits of data); lose apps and we can think small; the web needs a discovery service – smart devices project a URL and the phone makes them available; “proximity DNS”; URLs are flexible, lightweight, extensible, and standardized (see the sketch after this list)
  • “I'm more of a terraforming guy than a VC”; long-term, big-change thinker; only 2 kinds of ideas – truck ideas and road ideas; no one wants to build roads right now, just trucks and toll roads; eg Malcom McLean invented and patented the cargo container, reduced shipping costs by 26x, gave his patents away to ISO, and was even more successful
  • Apple's success has blinded us; we need to discover, invent, and move on to new things; need a physical web
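
The talk stayed conceptual, but the discovery layer is easy to picture. A minimal sketch, assuming devices broadcast bare URLs over something like BLE and using signal strength as a stand-in for proximity; the Beacon type, URLs, and rssi values are hypothetical illustrations, not anything Jenson showed:

```python
from dataclasses import dataclass

@dataclass
class Beacon:
    url: str   # the tiny bit of data a smart device projects
    rssi: int  # received signal strength in dBm (closer to 0 = nearer)

def rank_nearby(beacons: list[Beacon]) -> list[str]:
    """Return broadcast URLs ordered from nearest to farthest."""
    return [b.url for b in sorted(beacons, key=lambda b: b.rssi, reverse=True)]

heard = [
    Beacon("https://example.com/bus-stop/42", rssi=-60),
    Beacon("https://example.com/vending-machine", rssi=-45),
    Beacon("https://example.com/projector", rssi=-80),
]
for url in rank_nearby(heard):
    print(url)  # use it, then lose it: no app install required
```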

CHI 2014: Interactive Surfaces and Pervasive Displays

Pervasive Information Through Constant Personal Projection by Christian Winkler

  • AMP-D: interactive personal ambient display that projects on the floor near the device; constant personal projection; coarse augmented reality; interaction on floor, hand, mobile
  • mobile devices disconnect from immediate environment; this helps reconnect
  • the world as a display: content – static, environment, dynamic, urgent; where – the ground; how – boxes and spheres; when – static, relative to fixed location, with user, with timeouts; interaction – body movement, selection with hand, preview and binary decisions with hand gestures, transfer to/from phone, deselect/remove/snooze; privacy – no projection of private info on floor, only in hand
  • implementation: DLP projector with servo focus, depth camera, inertia sensor, hand/finger tracking; continuous interaction space, continuous information space

Bigger Is Not Always Better: Display Size, Performance, and Task Load During Peephole Map Navigation by Roman Radle

  • dynamic peephole navigation: the display is a window to a larger information space; how small can a peephole be without overburdening navigation? tablet size seems to be the sweet spot
  • navigation behavior: learning – scan the space, navigation – memory and landmarks for direct access
  • experiment: simulated peephole size on a large display with 3D pointer; navigate to 4 target pins as quickly and accurately as possible with 4 distractor pins; vary peephole size from projector to mobile projector to tablet to phone
  • results: long learning phase time lengths dropped to stable navigation phase; larger peepholes facilitate learning by reducing path length to view information space and better performance; no significant difference in navigation phase performance

Mechanical Force Redistribution: Enabling Seamless, Large-Format, High-Accuracy Surface Interaction by Alex Grau

  • MFR: high density force interaction with low density sensors; arbitrarily large sensor mats at relatively low cost; can be used with many sensor types; scan and interpolate between the forcels (force pixels); resolution depends on the force sensors and the space between them (see the sketch after this list)
  • cool demo of 121ppi hand sensor; multi-touch and hires position and pressure tracking
  • uses: automotive interiors; display walls; industrial; yoga mat sized for consumers, developers, and researchers – kickstarter later this year, hope to sell for $250 each
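
A rough sketch of the scan-and-interpolate idea: estimating force at a point between sparse forcels with bilinear interpolation. The grid layout and weighting here are assumptions for illustration, not necessarily the authors' actual reconstruction method:

```python
def interpolate_force(grid, x, y):
    """Bilinearly interpolate force at fractional coordinates (x, y),
    where integer coordinates land exactly on physical sensors."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Example: a 2x2 patch of forcels with pressure concentrated at one corner.
forcels = [[0.0, 0.2],
           [0.1, 0.9]]
print(interpolate_force(forcels, 0.5, 0.5))  # -> 0.3, between the sensors
```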

Effects of Display Size and Navigation Type on a Classification Task by Can Liu

  • displays getting larger and higher resolution; larger displays promote physical navigation but problematic for some uses such as desktop tasks; previous research hasn't looked at data manipulation tasks
  • is a wall display better than a desktop for classification tasks?
  • experiment: abstract classification task; does a wall outperform a desktop in high information density and task difficulty? 12 participants
  • results: desktop worked best on low info density; wall worked much better for high info density
  • why? different number of pick and drop actions? no difference; virtual zoom distortion? no difference; physical move distances? no difference at high density; reach range and trajectories? desktop condenses reach range requiring more pan and zoom, with more restrictions on trajectory, even if using overview or fisheye techniques

CHI 2014: Modeling Users and Interaction

Model of Visual Search and Selection Time in Linear Menus by Gilles Bailly

  • model to understand human performance for target acquisition in realistic menus
  • novice: scan, skip around; intermediate: directed search with some error; expert: directed search with less error or point directly
  • gaze distribution = f(menu organization, menu size, position of target, absent items, expertise); last item effect – last item is slightly faster to select
  • data collection: 40,000 selections for time, cursor position, and gaze position; cursor follows gaze
  • model handles previous findings about menu usage; accurately describes behavior; not a simple model – it has 3×8 parameters – but it's a complex task

Towards Accurate and Practical Predictive Models of Active-Vision-Based Visual Search by David Kieras

  • color is a better cue than size or shape but all contribute; want to build a model to predict human performance; built an EPIC model for this task; very good fit to empirical data; but EPIC models are complex and hard to develop; want to develop a GOMS model that can then generate a GLEAN GOMS model
  • color can be distinguished at a much wider angle than size and shape; focusing the model on color alone comes close enough for many situations; useful for model-based evaluation

Understanding Multitasking Through Parallelized Strategy Exploration and Individualized Cognitive Modeling by Yunfeng Zhang

  • in many tasks, multi-tasking is inevitable; computational cognitive models allow study
  • experiment: multimodal dual task; classification + tracking; sound on or off; peripheral (other display) visible or not
  • result: sound helps both tasks when the peripheral display is not visible; combined is even better
  • EPIC model: explore 72 different microstrategies for task switching, with 12 settings, so 864 models; used parallel computation to speed up the simulations, shortening from 14 hours to 20 minutes (see the sketch after this list)
  • basic model follows human data closely; can also compare different strategies; human data averages tracks best strategies closely, but individual performance varies widely
  • individualized models fit the data well and could find best strategies by comparing the best human performers; modeling average performance leads to a match with the bottom-performing humans
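
A minimal sketch of the parallelized exploration, assuming the 72×12 grid mentioned in the talk; simulate() is a stub standing in for an actual EPIC model run, not the authors' code:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(args):
    """Stand-in for one EPIC model run; returns ((strategy, setting), error)."""
    strategy, setting = args
    predicted_error = (strategy * 7 + setting * 3) % 100 / 100  # dummy score
    return (strategy, setting), predicted_error

if __name__ == "__main__":
    grid = list(product(range(72), range(12)))  # 864 candidate models
    with ProcessPoolExecutor() as pool:         # one simulation per worker
        results = dict(pool.map(simulate, grid))
    best = min(results, key=results.get)
    print(f"best (strategy, setting): {best}, error {results[best]:.2f}")
```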

How Does Knowing What You Are Looking For Change Visual Search Behavior by Duncan Brumby

  • 2 types of search: semantic vs known-item search; known-item is faster; why are semantic searches slower?
  • accessing facts in our head takes time; is it reflected in eye movements? no, except when tightly packed
  • instead, it relates to the distance between eye jumps; semantic goes item by item, known-item jumps around

Automated Nonlinear Regression Modeling for HCI by Antti Oulasvirta

  • nonlinear regression models: expressive and white-box, like pointing, learning, foraging; hard to acquire these models
  • exploration is inefficient and laborious, so automate it using optimization techniques from symbolic programming (toy sketch after this list)
  • experiment: 11 existing models in literature using same data; improved 7 of 11 models and nearly the same for 4 others; complex data sets come up with complex models; constrain settings; also works with multiple data sets
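
A toy stand-in for the automated model search, assuming a small fixed family of candidate forms instead of the paper's full symbolic exploration; the candidate set and data points are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

candidates = {
    "linear": lambda x, a, b: a * x + b,
    "power":  lambda x, a, b: a * np.power(x, b),
    "log":    lambda x, a, b: a + b * np.log2(x + 1),  # Fitts-like form
}

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.9, 1.6, 2.2, 2.9, 3.6])  # made-up pointing times

best_name, best_sse = None, float("inf")
for name, f in candidates.items():
    params, _ = curve_fit(f, x, y, p0=[1.0, 1.0], maxfev=10000)
    sse = float(np.sum((f(x, *params) - y) ** 2))  # squared error of this form
    if sse < best_sse:
        best_name, best_sse = name, sse
print(f"best model form: {best_name} (SSE {best_sse:.3f})")
```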

CHI 2014: Case Studies – Realities of Fieldwork

An Ethnographic Study of South African Mobile Users by Susan Dray

  • [10 minutes of technical problems; harumph]
  • consulting with undisclosed client
  • study from 2008 to inspire ideas on entering African market; interested in mobile devices; broad scope, with tentative ideas in safety and finance; 3 months from first encounter to final report
  • Khayelitsha Township
  • assumptions: rural people are unbanked FALSE; travel long distance on foot TRUE; need to send money by car/bus FALSE
  • 11 families in informal area shacks, formal areas, ADP housing; also 3 families in rural area receiving money; all had basic or feature phones
  • challenges: feasibility (approvals took a long time, plus other logistics), access to participants (recruiting), localization and translation (Xhosa), logistics, safety, trade-offs
  • results: identify new product areas; body of knowledge on urban and rural; develop empathy for people at the bottom of the pyramid

Adopting Users' Designs to Improve a Mobile App by Kate Sangwon Lee

  • Naver Corp: makes many apps, including Line and the Naver App
  • many changes to apps over time; for small changes user research is often skipped
  • developed quick and participatory method; 44 users in 3 days including prototyping; cafe study + participatory design
  • method: interview (10 min) -> participatory design (15 min) -> concept evaluation (5 min)
  • challenges: approaching strangers in a cafe (interview 1 or 2 at a time, use cafe cards as payment, keep it short); prototyping (printed background of UI, large enough to record descriptions, colored pencils)
  • results: 1 dominant pattern (frequently accessed functions) and 2 minor patterns (practical info like weather and horoscopes); prototype and test 3 different prototypes
  • strengths: cheap and fast; easily identify subtle needs; visual outputs easy to understand and share; easy to conduct; mobile; multiple domains – mobile, small PC UIs, small hardware products, mobile service concept
  • limitations: small areas of UI, experienced users, no in-depth thoughts, hard to express interaction

User-Centered Design for More Efficient Drill-Rig Control Systems by Katri Koli

  • Leadin Inc (UX firm) working with Sandvik (mining equipment company)
  • open-pit mine surface drilling equipment; drill holes, fill with explosives, blasting
  • precise positioning of drill very important; 6 components, 2 directions of movement, 12 motions, traditionally use 2 joysticks
  • develop automatic positioning mode; easier, faster, more accurate, user acceptance?
  • method: contextual inquiry, iterative prototyping with simulator, usability testing
  • study: 4 operative site visits; winter conditions; focus on hole positioning; 4 users of various experience
  • challenges: restricted environments; recruiting participants through the mine site; challenging environment (cabin designed for one operator, researchers behind operator chair, winter clothing even inside, may be nothing going on while there, safety prep, notebooks but possibly not photos or videos); getting enough interesting data (only 2-3 minutes of positioning in 60 minutes of work); working with a simulator rather than the real world for prototypes and testing
  • collected 800 notes; need a clear research focus; affinity on all, but additional analysis on 1/4 that were about positioning; iterative prototyping and 2 rounds of 6 usability tests with drill rig simulator
  • results: automatic positioning was faster, much more accurate, and easy to learn and use; products will ship this year; methods work with industrial users

Panel Question and Answer Session

  • Q: would participatory methods work in the South Africa study? useful after the field visits when products were being explored
  • Q: how were users compensated? mines: small gifts, cafés: coffee cards worth about $10; need to pay based on local culture and environment
  • Q: did you run into situations where you weren't willing to work with individuals? Korea: hard to approach middle-aged men, mines: no issues, Africa: screener was actually a little too strict
  • Q: did miners worry about effects of automation? increases safety and is more of a supervisory role so helped avoid uncomfortable work situations
  • Q: how did you pick the right users? Africa: worked with local marketing firms
  • Q: why two translators? difficult to translate directly, so played off each other and could also run errands and help deal with situations
  • Q: how did you deal with being from a very different culture? working with locals very important
  • Q: usability test didn't use the same operators? couldn't access actual operators but used company trainers who were familiar with work
  • Q: did you have to consider non-standard conditions or failure conditions? have to be able to get out of full auto mode, still need to teach manual ways
  • Q: how to avoid self-reporting bias and get an accurate baseline? mines: observe actual work in the environment; cafe: many of our team are also app users, so piloted with them

CHI 2014: Plenary – Elizabeth Churchill of eBay Research Lab

Reasons to Be Cheerful – Part 4

  • song: Reasons to Be Cheerful – Part 3
  • HCI is good at thinking of other people's points of view, imagining ourselves as other people; eg users, maker communities; noticing, reflecting, questioning; everything seems to be speeding up, but we must take the time to think and reflect
  • enjoyment comes from physiological needs met, strong relationships, meaningful work/activities, perspective and passion
  • key directions: proactive health and well-being, marketplaces and exchanges, education and self-directed learning, data collection/curation/analytics/experimentation/interpretation, internet of things
  • we in the HCI community have responsibility to keep people – as individuals and communities – in our technology systems; don't filter out the human emotions, empathy, culture, physiology, psychology; ensure that technology engenders and taps into joy
  • known problems, known solutions; known problems, unknown solutions; unknown problems, unknown solutions
  • Reflect: 5 things you found here that surprised you in a positive way; 4 new approaches or methods; 3 people you'd like to be in touch with; 2 sub-areas where you are out of your comfort zone you might influence; 1 grand challenge that you can engage in that may change the world

CHI 2014: Decisions, Recommendations, and Machine Learning

Customization Bias in Decision Support Systems by Jacob Solomon

  • user satisfaction improves with customizability; is it a good design choice for decision support systems?
  • data -> system -> recommendation -> decision maker -> decision
  • some systems support customization; customization -> recommendation quality -> decision quality; is this always true?
  • customization bias: bias because decision maker has a part in driving the recommendation; reduce ability to evaluate quality of recommendation; supports confirmation bias
  • experiment: fantasy baseball; predict scores assisted by DSS; one group could adjust statistical categories used, other couldn't; recommendations were predetermined, no algorithm, and both got same recommendations; subjects received 8 good recommendations and 4 poor recommendations; 99 MTurk participants with fair baseball knowledge
  • findings: customizers had slightly better recommendations, but not the point of the study; customizers were more likely to agree with system; more likely to agree if recommendation was consistent with customization (confirmation bias); customization can enhance trust in system but trust is sometimes misplaced; ties decision making more to quality of recommendation (whether it gives good or poor ones)

Structured Labeling for Facilitating Concept Evolution in Machine Learning by Todd Kulesza

  • data needs to be labeled for machine to distinguish; people don't always label consistently; concept evolution – mentally define and refine concept
  • study: can we detect concept evolution; 9 experts, 200 pages, twice with 2 weeks in between; experts were only 81% consistent with prior labeling
  • can we help people define and refine the concept while labeling? added a 'could be' choice alongside yes and no to allow refinement later, once the concept has firmed up; labelers often didn't name their groups, so provided automated summaries; forgot what they did with a similar page, so automated recommending a group; not sure some pages were worth structuring, so show similar future pages (see the sketch after this list)
  • study: 15 participants, 200 pages, 20 minutes, 3 simple categories; conditions of no structure, manual structure, and assisted structure
  • findings: manual structuring created many more groups than automated; also made many more adjustments in the first half of the experiment, fewer later; manual structuring more than tripled consistency and assisted almost tripled it; took longer than baseline to label early items, but not longer for later items; preferred structured and assisted over baseline; easier to verify a recommendation than to come up with their own
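
A minimal sketch of what structured labeling could look like as a data structure, with the 'could be' bucket and named groups from the talk; the class, method names, and page ids are assumptions, not the authors' implementation:

```python
from collections import defaultdict

class StructuredLabels:
    def __init__(self):
        # label ("yes" | "no" | "could be") -> group name -> items
        self.groups = defaultdict(lambda: defaultdict(list))

    def assign(self, item, label, group="ungrouped"):
        assert label in ("yes", "no", "could be")
        self.groups[label][group].append(item)

    def revisit(self, label="could be"):
        """Items parked for refinement once the concept firms up."""
        return {g: list(items) for g, items in self.groups[label].items()}

labels = StructuredLabels()
labels.assign("page-17", "yes", group="recipes with photos")
labels.assign("page-42", "could be", group="recipe index pages")
print(labels.revisit())  # {'recipe index pages': ['page-42']}
```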

Choice-Based Preference Elicitation for Collaborative Filtering Recommender Systems by Benedikt Loepp

  • recommendation system: select items from a large set that match interests; collaborative filtering is most popular and is effective; criticized because the focus is only on improving algorithms rather than improving the user's role and satisfaction; also at the beginning there is no data to work from; ratings are inaccurate, comparisons are effective, but choosing comparisons depends on preexisting data
  • goal: improve user effectiveness and control; generate a series of choices based on the most important factors in a matrix factorization; items must be frequently rated, highly diverse choices, similar in non-choice factors (see the sketch after this list)
  • evaluation: balance automatic recommendation and manual exploration; test 4 different user interfaces – popular, manual exploration, automatic recommendation, choice based model; 35 participants using each method to choose six movies + survey
  • results: choice based significantly better than other models in all dimensions but required more effort than popular; good cost-benefit ratio; users felt in control; no profile or additional data required; works well for experience-based products
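
A simplified sketch of the choice-based idea, assuming item latent factors from a matrix factorization are already available; the factor values, item names, and the simple additive update are invented for illustration, not the paper's method:

```python
import numpy as np

item_factors = {            # item -> latent factors (e.g. from SVD)
    "action_blockbuster": np.array([ 1.8,  0.1]),
    "quiet_drama":        np.array([-1.7,  0.2]),
    "documentary":        np.array([-0.2,  1.6]),
    "romcom":             np.array([ 0.1, -1.5]),
}

def choice_set(factor):
    """Two items far apart on one factor, i.e. a highly diverse choice."""
    ranked = sorted(item_factors, key=lambda i: item_factors[i][factor])
    return ranked[0], ranked[-1]

user_pref = np.zeros(2)
for factor in range(2):                # one choice per major factor
    low, high = choice_set(factor)
    picked = low                       # stand-in for the user's actual click
    user_pref += item_factors[picked]  # move the profile toward the choice

scores = {i: float(f @ user_pref) for i, f in item_factors.items()}
print(max(scores, key=scores.get))     # recommend the best match
```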

ARchitect: Finding Dependencies Between Actions Using the Crowd by Walter Lasecki

  • activity recognition: system recognizing what you are doing; eg help people who may need assistance in living; automated systems need a lot of training data, whereas people can recognize activities very easily; crowdsource from Legion:AR; still many permutations in behavior that must be recorded and labeled
  • approach: define dependency structure to constrain meaningful variations
  • ARchitect: ask yes/no questions about different permutations of action steps to build valid models; eg 3 videos led to 22 valid models

Scalable Multi-label Annotation by Alex Berg

  • multi-label annotation: identify aspects/objects that are or are not in an image; big in machine vision
  • detect 200 categories in 100,000 images; a large set is useful to many areas of research; expensive to scale, so exploit the hierarchical structure of concepts; correlation and sparsity; kind of like 20 questions for MTurk participants (see the sketch after this list)
  • how to select the right questions: utility, cost, accuracy
  • results: 20,000 images from set, 200 category labels; accuracy 99.5%+, 4-6x as fast
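
A toy version of the hierarchy trick, assuming a tiny two-branch taxonomy and a perfect oracle; real question selection trades off utility, cost, and accuracy as noted above, which this sketch ignores:

```python
taxonomy = {
    "animal":  ["dog", "cat", "bird"],
    "vehicle": ["car", "bicycle"],
}

def annotate(image_labels):
    """Return (labels present, number of questions asked)."""
    present, questions = set(), 0
    for parent, leaves in taxonomy.items():
        questions += 1                    # "is there any <parent>?"
        if not image_labels & set(leaves):
            continue                      # a "no" prunes the whole subtree
        for leaf in leaves:
            questions += 1                # "is there a <leaf>?"
            if leaf in image_labels:
                present.add(leaf)
    return present, questions

print(annotate({"car"}))  # ({'car'}, 4) vs 5 questions asked flat
```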

CHI 2014: Cross-Device Interaction

Smarties: An Input System for Wall Display Development by Oliver Chapuis

  • wall display input issues: mouse/keyboard – stuck to the desk; touch – too close; laser pointer and air gestures – hard to track; prefer a mobile tablet
  • input programming is complex and expensive; provide a system for easy and fast prototyping; mobile device(s) -> protocol -> library -> wall
  • interface: multiple pucks representing “cursors” at multiple locations on the wall; each puck has functions of a mouse with multi-touch and other functions like text; collaborative; you have your own pucks and share control; store and retrieve pucks
  • protocol and libraries: multi-client, event based; synchronization for shared pucks; C++, Java, JavaScript; similar to mouse management but applied to each puck (see the sketch after this list)
  • applications: attach pucks to wall lenses for exploration of map
  • software available
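
A rough sketch of what per-puck event dispatch could look like on the wall side; the message format and API names are assumptions for illustration, not the actual Smarties library:

```python
class SmartiesClient:
    """Wall-side dispatcher: mouse-like events, multiplied across pucks."""
    def __init__(self):
        self.handlers = {}  # event name -> list of callbacks

    def on(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def dispatch(self, message):
        """message: {'puck': id, 'event': name, 'x': ..., 'y': ...}"""
        for handler in self.handlers.get(message["event"], []):
            handler(message)

wall = SmartiesClient()
wall.on("move", lambda m: print(f"puck {m['puck']} -> ({m['x']}, {m['y']})"))
# a shared puck moved by a collaborator on their tablet:
wall.dispatch({"puck": 3, "event": "move", "x": 0.42, "y": 0.77})
```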

Conductor: Enabling and Understanding Cross-Device Interaction by Peter Hamilton

  • vision: the move from physical desk to mobile device has lost some of the utility of large physical desk space; interactions across multiple devices by one user; “symphony” of devices
  • Conductor: targeted transmissions to other devices; cue broadcasting; minimally invasive; contextual actions; persistent connections -> duet functional bonding; duet management; cross-application views; peripheral device sharing (eg share a Bluetooth keyboard); cross-device task manager
  • user study: large search and integration task given a whiteboard, paper and pen, 5 Nexus 10 and 5 Nexus 7 devices; all used the devices; using spatial memory to store specific pieces of data on individual devices

Panelrama: Enabling Easy Specification of Cross-Device Web Applications by Jishuo Yang

  • by 2017 the average household will have 4 internet enabled devices; how to take advantage of those
  • automatically reassign UI elements to the available devices based on best device for certain functions (example uses laptop/projector = slide, phone = remote control, pebble watch = presentation time, google glass = presenter notes)
  • panel: UI building block; group of UI components with a shared purpose
  • Panelrama: attributes – screen size, proximity to user, keyboard, touchscreen; developers score each characteristic for each panel – extensible attributes; Panelrama models attributes of devices; optimizes layout across multiple devices for each panel; minimal code changes, just define panel tag and panel definition (see the sketch after this list)
  • developer study: 8 developers converted single device apps to multi-device in 40 minutes or less
  • code available soon
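
A toy version of the matching idea, assuming a dot-product fit between panel attribute weights and device attribute scores; attribute names and weights are invented, and the real system optimizes the layout globally across devices where this greedy sketch ignores contention:

```python
panels = {   # panel -> how much it wants each attribute (developer-scored)
    "slides":        {"screen_size": 5, "touch": 0},
    "remote":        {"screen_size": 1, "touch": 5},
    "speaker_notes": {"screen_size": 2, "touch": 1},
}
devices = {  # device -> what it offers (system-scored)
    "projector": {"screen_size": 5, "touch": 0},
    "phone":     {"screen_size": 1, "touch": 5},
    "tablet":    {"screen_size": 3, "touch": 4},
}

def fit(panel, device):
    """Dot product of panel wants and device offers."""
    return sum(w * device.get(attr, 0) for attr, w in panel.items())

layout = {p: max(devices, key=lambda d: fit(wants, devices[d]))
          for p, wants in panels.items()}
print(layout)  # {'slides': 'projector', 'remote': 'phone', ...}
```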

Interactive Development of Cross-Device User Interfaces by Michael Nebeling

  • meeting room scenario: parts of UI distributed to speaker tablet, projector, and audience phones; device-centric with different roles
  • classroom scenario: teacher's device and 3 student groups; role centric ie student + group or teacher
  • design time: device types including unknown devices; user roles; adapt UI elements across devices; design and testing; reuse
  • run-time requirements: dynamic integration of devices, update the distribution, matching and adaptation
  • XDStudio: GUI builder for multi-device UI; “DUI” = distributed user interface; on device and simulated authoring modes; user study validated that different modes are preferred for different situations; define distribution profiles for devices, device classes, and roles; client-server architecture
  • evaluation: would people use the authoring modes? used the scenarios with a mix of mode availability; the device-centric scenario benefits more from both modes, whereas the classroom scenario was fine with simulation

CHI 2014: Risks and Security

Easy Does It: More Usable CAPTCHAs by Angelique Moscicki

  • CAPTCHA: block low grade, automated abuse on low risk tasks; many variations in specific features
  • usability measures: accuracy, solving time, satisfaction
  • automatic variations in features and parameters; 97,000 Mechanical Turk participants on 750,000 tests; 5,000 satisfaction surveys
  • findings: users sensitive to font choice, prefer simpler character sets, eg numeric; not sensitive to screen resolution or length; many feature interactions, 20% had nonlinear relationships! user testing required; preference for positive words, digits, and common words; random strings least preferred
  • tested and deployed new algorithm: numeric digits, removed confusion between 1 and 7 and o (oh) and 0 (zero); +6.7% accuracy, -55% reloads, -10% failed

Using Personal Examples to Improve Risk Communication for Security and Privacy Decisions by Marian Herbach

  • 67 million apps downloaded per day on Google Play in 2013; users entrust personal data to devices
  • many people do not understand permissions and become habituated, so they ignore and just grant them
  • use concrete and personal examples to demonstrate risk; eg show photos that could be deleted or describe explicit risks like viruses or show example contacts
  • study: mockup app, piloted, then run on Mechanical Turk; participants picked 2-6 apps to install and were shown the permission screen
  • findings: 14-23% of the time participants chose apps requesting fewer permissions, or none at all, even after an app was selected; didn't prevent users from choosing to install at least one app (in most cases); brand and high ratings didn't change decisions; showing personal information created negative affect, including paying more attention to real permission screens

Experiences in Account Hijacking by Iulia Ion and Richard Shay

  • account compromise: example Mat Honan; lots of effort for a small goal (twitter handle) on a normal person with devastating impact to that person
  • goal: how to encourage people to use good security practices; experiences and attitudes
  • study: 294 Mechanical Turk participants; the 15-30% who said they had an account compromised received a different survey
  • findings: accounts are often valuable and used often; attackers unknown and known (affects the relationship); harm is concrete and emotional; accept some responsibility for security; incomplete security understanding; 50% notified by others, 30% noticed content, 30% notified by service, 17% locked out; 33% had email sent from account; 20% said no concrete harm; most felt negative emotions; 2/3 said it improved their security behavior; most say user and service provider are responsible; often said responsibility related to passwords; services should prevent and inform users of compromises
  • implications: use stories with emotional appeal to drive people to better security behavior; emphasize that there is more to security beyond passwords; services should have good notification mechanisms (alternative channels)

Experimenting at Scale with Google Chrome's SSL Warning by Adrienne Felt

  • active network attack: intercepting traffic between user and server; SSL supposed to protect; if something is wrong with SSL, warning is shown
  • 68% of the time people ignore the warning; often annoyed by false warnings; but the warning could be improved, eg Firefox has only a 33% click-through rate on their warning; want to stop annoying people and get informed consent
  • study: 17,000 impressions per condition over a week; with the Firefox warning in Chrome, click-through was lower but still higher than in Firefox; images of people didn't have an impact, despite expectations from psychology; styling changes had no effect; number of extra clicks has no effect
  • other factors? better headlines and calls to action; separate action buttons physically and make less similar

Betrayed by Updates: How Negative Experiences Affect Future Security by Rick Wash

  • eg police warning at Michigan State about IE security vulnerability
  • most attacks target known vulnerabilities where patch is available; why do people not patch?
  • interviewed 37 non-expert Windows users, mostly grad students (high risk if computer compromised, low cash to replace)
  • findings: don't want unexpected changes to user interface; unused and unrecognized software, like Java; current version already works, why bother, like Adobe Reader
  • ref: Microsoft Security Intelligence Report v13, 2013; browser, Java, and Adobe account for a large proportion of attack vectors

CHI 2014: Emotion and Mobiles

Mobile Attachment – Causes and Consequences for Emotional Bonding with Mobile Phones by Alexander Meschtscherjakov

  • 7 billion population, 6.8 billion mobile subscriptions
  • psychology: attachment theory, extended self; consumer research and design: brand attachment, product attachment
  • mobile attachment: cognitive and emotional target-specific bond connecting a person's self and a mobile device that varies in strength
  • model: causes, influences, consequences
  • causes: device-self linkage routes, design space determinants; empowerment -> utility; self (past + private + public + collective) enrichment -> memory + self image + affiliation + world view; self-gratification -> pleasure
  • influences: user (personality, brand history, ownership, etc), environment (ads, narratives, other devices), device (design, functions, quality, etc)
  • consequences: investment of resources and self-image resources, behavioral and emotional responses
  • conclusions: attachment exists, causes and consequences are not mutually exclusive, helpful to investigate theories in different disciplines
  • “I Just Want to Be Your Telephone” song

Hooked On Smartphones: Overuse Among College Students by Uichin Lee

  • smartphone overuse: disrupt social interactions, mental health, sleep patterns; technological addiction – behavioral not chemical
  • goal: identify detailed usage behaviors related to problematic usage
  • study: 95 students for a semester; 36 in risk group; Android detailed usage logging tool (unlock, app use, lock, app notifications)
  • addiction scale: interference, virtual world orientation, withdrawal, tolerance
  • findings: risk groups spend more time, more frequently, and in longer sessions; top 1 or 2 apps dominate usage, and the risk group is even more skewed; risk groups use more at all times but especially morning and evening; risk groups use web apps more and possibly communication apps; instant messaging dominant; external triggers (notifications) 450+ per day in risk group; 3 times more web page visits in risk group
  • problematic usage: feel more compelled to check devices (more anxious), less conscious and structured in usage (less self-regulation)

Influence of Personality on Satisfaction with Mobile Phone Services by Nuria Oliver

  • relationship between personality and satisfaction with devices
  • satisfaction drives sustained consumption, is a focus of marketing, and an important measure in usability; used Big 5 personality dimensions
  • model: relate personality, customer satisfaction, perceived usability, device usage
  • study: 603 participants, young, gender balanced, rural and urban, used phone at least 6 months; call data records, so basic feature phone usage; structural equation modeling (SEM)
  • findings: biggest factor is perceived usability and satisfaction (.48), mostly perceived efficiency; usage negatively correlates with satisfaction (mostly calls and duration); extroversion influences usage; extroversion and conscientiousness affect perceived usability; conscientiousness has a positive influence on satisfaction, intellect negative
  • implications: personality-based service personalization, minimize disruptions, be aware of user saturation points and usage/time budgets

Broken Display = Broken Interface? by Florian Schaub

  • 37% of mobile phones damaged in first 3 months; 23% of iPhones have damaged screens (disclaimer: study by insurance company)
  • 95 photos of damaged screens from Mechanical Turk; image analysis; annotate and code damage; statistical damage analysis; damage topology
  • damage categories: minor, medium, severe; compared to self-reported touch damage: correlates with extent of screen damage, but those with minor screen damage perceive relatively more touch damage
  • 98% continued to use after damage for an average of 5 months, 8.4% more than a year; 70% did not plan to repair; still usable, damage insignificant, financial considerations
  • viewing issues: location, extent, opacity; typing impacted by reading impact, depending on orientation; input issues: tactile sensations, source of injury, UI elements unreachable
  • coping strategies: preventive, viewing, touch and input, calling, interaction; eg ignore or get used to, move content around, be more careful, alternative interaction paths, move to another app
  • design considerations: support scrolling and device rotation, layout and theme customization (eg dark background makes damage less noticeable), alternative interaction paths, adaptive representations when sensing damage

CHI 2014: Studying Visualization

Structuring the Space by Nathalie Henry Riche

  • people often refer to information visualizations as maps; many visualizations use spatial metaphors; picked up on contour lines from topographical maps for data
  • mental model: spatial structure + landmarks; do they help or hinder readability and understanding?
  • user study: hinder ability to find common neighbors? help perform comparison of similar graphs? help revisit nodes? 3 conditions – no structure, grid, contour lines
  • findings: no changes in readability shown, contour lines better than grids for comparison but no difference between grids and control, ensure data sets have salient features like clusters, contour better than grid or control for revisitation even though people thought the grid helped

Highlighting Interventions and User Differences by Giuseppe Carenini

  • investigate user-adaptive visualizations; what to adapt to? when to adapt? how to adapt?
  • evaluate 4 types of highlighting interventions: bold, connected arrows, de-emphasis, reference lines
  • highlighting can layer in relevant information in a complex visualization
  • user study: 62 participants, bar graphs, tasks – retrieve value + compute derived value; look at impact of user characteristics; varied timing of intervention
  • findings: deemphasis is best but bold and arrow worked too, dynamic timing erased edge of deemphasis, more complex task also took deemphasis to parity with bold and arrow, all interventions rated useful, visual working memory related to perceived usefulness of reference line

Evaluating a Tool for Improving Chart and Graph Accessibility by Gita Lindgaard

  • how do blind people form a mental representation of a graph?
  • descriptions must be consistent, order of descriptions follows order of questions, start from oldest data, need to interrogate the graph, vocabulary – x/y axis + up/down
  • iGraph: extracts semantics from Excel charts and generates natural language output, plus supports interaction commands (see the sketch after this list)
  • usability study 1: complex graphs took longer; blind people used twice as many commands as sighted and found it easier to use; all used more commands than necessary; skip was confusing; didn't use the where-am-I command
  • usability study 2 (improved system): graph complexity had no effect, blind users used many more commands still (double check understanding), blind users navigated left more often, sighted start over more often
  • field study: system could handle most of the questions users had about their chosen graphs; order of information: title, type, then other info; presentations of graphs were often missing a great deal of critical metadata
  • test expert to novice vocabulary usage: iGraph vocabulary mentioned by all participants
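
A minimal sketch of the generate-a-description idea, assuming a simple series data model (iGraph itself reads Excel charts); the function and data are invented, and the ordering follows the title-type-details preference found in the field study:

```python
def describe(title, kind, series):
    """Emit a consistent description: title, chart type, then details."""
    lo = min(series, key=series.get)
    hi = max(series, key=series.get)
    return (f"{title}. {kind} chart with {len(series)} points. "
            f"Highest: {hi} ({series[hi]}). Lowest: {lo} ({series[lo]}).")

print(describe("Quarterly sales", "Line",
               {"Q1": 12.0, "Q2": 18.5, "Q3": 9.75, "Q4": 21.0}))
```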

Understanding Users' Comprehension and Preferences for Composing Information Visualizations by Huahai Yang

  • develop a system to automatically compose a visualization from multiple charts and pick best representation; choice depends on insight you are looking for, eg side by side bar good at extrema identification, lines good at correlation comparison
  • study: describe composite visualizations to discover vocabulary and concepts; Mechanical Turk yielded 1,500 useful descriptions, which were then coded; 4 basic insights – read value, extrema identification, characterize distribution, correlation; all can be used for comparison as well; prioritize insights for different types of charts; value comparison, extrema, and correlation swamp other insights (Zipf function)
  • most preferred: crossed-bar (side by side) except for correlation comparison which prefers crossed-line