CHI 2014: Modeling Users and Interaction

Model of Visual Search and Selection Time in Linear Menus by Gilles Bailly

  • model to understand human performance for target acquisition in realistic menus
  • novice: scan, skip around; intermediate: directed search with some error; expert: directed search with less error or point directly
  • gaze distribution = f(menu organization, menu size, position of target, absent items, expertise); last item effect – last item is slightly faster to select
  • data collection: 40,000 selections for time, cursor position, and gaze position; cursor follows gaze
  • model handles previous findings about menu usage; accurately describes behavior; not a simple model – it has 3×8 parameters – but it is a complex task

Towards Accurate and Practical Predictive Models of Active-Vision-Based Visual Search by David Kieras

  • color is a better cue than size or shape but all contribute; want to build a model to predict human performance; built an EPIC model for this task; very good fit to empirical data; EPIC models are complex and hard to develop; want to develop a GOMS model that can then generate a GLEAN GOMS model
  • color can be distinguished in a much wider angle than size and shape; focus model on color alone and comes close enough for many situations; useful for model-based evaluation

Understanding Multitasking Through Parallelized Strategy Exploration and Individualized Cognitive Modeling by Yunfeng Zhang

  • in many tasks, multi-tasking is inevitable; computational cognitive models allow study
  • experiment: multimodal dual task; classification + tracking; sound on or off; peripheral (other display) visible or not
  • result: sound helps when peripheral not visible for both tasks; combine even better
  • EPIC model: explore 72 different microstrategies for task switching, with 12 settings, so 864 models; used parallel computation to speed up the simulations, shortening from 14 hours to 20 minutes
  • basic model follows human data closely; can also compare different strategies; human data averages tracks best strategies closely, but individual performance varies widely
  • individualized models fit data well and could find best strategies by comparing against the best human performers; modeling average performance leads to a match with the bottom-performing humans
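A minimal sketch of the parallel strategy sweep described above (72 microstrategies × 12 settings = 864 model runs per participant); `simulate` here is a hypothetical placeholder for executing one cognitive-model variant, not the authors' EPIC code.

```python
# Hedged sketch: sweep candidate task-switching strategies in parallel and
# keep the best-fitting one. `simulate` is a dummy stand-in for a model run.
from itertools import product
from multiprocessing import Pool

def simulate(args):
    strategy_id, setting_id = args
    # A real run would execute the cognitive model and return its error
    # against one participant's data; here we just fake a score.
    error = abs(strategy_id - 35.5) + 0.1 * setting_id
    return (strategy_id, setting_id, error)

if __name__ == "__main__":
    grid = list(product(range(72), range(12)))   # 72 x 12 = 864 models
    with Pool() as pool:                         # run simulations in parallel
        results = pool.map(simulate, grid)
    best = min(results, key=lambda r: r[2])
    print(f"best strategy={best[0]}, setting={best[1]}, error={best[2]:.2f}")
```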

How Does Knowing What You Are Looking For Change Visual Search Behavior by Duncan Brumby

  • 2 types of search: semantic vs known-item search; known-item is faster; why are semantic searches slower?
  • accessing facts in our head takes time; is it reflected in eye movements? no, except when tightly packed
  • instead, it relates to the distance between eye jumps; semantic goes item by item, known-item jumps around

Automated Nonlinear Regression Modeling for HCI by Antti Oulasvirta

  • nonlinear regression models: expressive and white-box, like pointing, learning, foraging; hard to acquire these models
  • exploration is inefficient and laborious, so automate it; using optimization techniques from symbolic programming
  • experiment: 11 existing models in literature using same data; improved 7 of 11 models and nearly the same for 4 others; complex data sets come up with complex models; constrain settings; also works with multiple data sets
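As a rough illustration of automated model search (not the paper's actual symbolic-optimization machinery), the sketch below fits a few candidate nonlinear forms to synthetic pointing-style data and compares them by AIC; the candidate set, data, and scoring are all assumptions.

```python
# Hedged sketch: fit several candidate nonlinear model forms and compare them,
# trading fit quality against the number of parameters.
import numpy as np
from scipy.optimize import curve_fit

candidates = {  # illustrative model forms with free parameters a, b, c
    "linear": lambda x, a, b: a + b * x,
    "fitts":  lambda x, a, b: a + b * np.log2(x + 1),
    "power":  lambda x, a, b, c: a + b * np.power(x, c),
}

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y = 0.2 + 0.3 * np.log2(x + 1) + rng.normal(0, 0.02, x.size)  # synthetic data

for name, f in candidates.items():
    params, _ = curve_fit(f, x, y, maxfev=10000)
    rss = np.sum((y - f(x, *params)) ** 2)
    aic = x.size * np.log(rss / x.size) + 2 * len(params)  # fit vs. complexity
    print(f"{name:6s} AIC={aic:7.1f} params={np.round(params, 3)}")
```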

CHI 2014: Case Studies – Realities of Fieldwork

An Ethnographic Study of South African Mobile Users by Susan Dray

  • [10 minutes of technical problems; harumph]
  • consulting with undisclosed client
  • study from 2008 to inspire ideas on entering African market; interested in mobile devices; broad scope, with tentative ideas in safety and finance; 3 months from first encounter to final report
  • Khayelitsha Township
  • assumptions: rural people are unbanked FALSE; travel long distance on foot TRUE; need to send money by car/bus FALSE
  • 11 families in informal area shacks, formal areas, RDP housing; also 3 families in rural area receiving money; all had basic or feature phones
  • challenges: feasibility (approvals took a long time, plus other logistics); access to participants (recruiting); localization and translation (Xhosa); logistics; safety; trade-offs
  • results: identify new product areas; body of knowledge on urban and rural; develop empathy for people at the bottom of the pyramid

Adopting Users' Designs to Improve a Mobile App by Kate Sangwon Lee

  • Naver Corp: makes many apps, including Line and the Naver App
  • many changes to apps over time; for small changes user research is often skipped
  • developed quick and participatory method; 44 users in 3 days including prototyping; cafe study + participatory design
  • method: interview (10 min) -> participatory design (15 min) -> concept evaluation (5 min)
  • challenges: approaching strangers in a cafe (interview 1 or 2 at a time, use cafe gift cards as payment, keep it short); prototyping (printed background of UI, large enough to record descriptions, colored pencils)
  • results: 1 dominant pattern (frequently accessed functions) and 2 minor patterns (practical info like weather and horoscopes); prototype and test 3 different prototypes
  • strengths: cheap and fast; easily identify subtle needs; visual outputs easy to understand and share; easy to conduct; mobile; multiple domains – mobile, small PC UIs, small hardware products, mobile service concept
  • limitations: small areas of UI, experienced users, no in-depth thoughts, hard to express interaction

User-Centered Design for More Efficient Drill-Rig Control Systems by Katri Koli

  • Leadin Inc (UX firm) working with Sandvik (mining equipment company)
  • open-pit mine surface drilling equipment; drill holes, fill with explosives, blasting
  • precise positioning of drill very important; 6 components, 2 directions of movement, 12 motions, traditionally use 2 joysticks
  • develop automatic positioning mode; easier, faster, more accurate, user acceptance?
  • method: contextual inquiry, iterative prototyping with simulator, usability testing
  • study: 4 operative site visits; winter conditions; focus on hole positioning; 4 users of various experience
  • challenges: restricted environments; recruiting participants through mine site; challenging environment (cabin designed for one operator, researchers behind operator chair, winter clothing even inside, there may not be anything interesting happening while there, safety prep, notebooks but possibly not photos or videos); getting enough interesting data (only 2-3 minutes of positioning in 60 minutes of work); working with simulator rather than real world for prototypes and testing
  • collected 800 notes; need a clear research focus; affinity diagramming on all notes, with additional analysis on the 1/4 that were about positioning; iterative prototyping and 2 rounds of 6 usability tests with drill rig simulator
  • results: automatic positioning was faster, much more accurate, and easy to learn and use; products will ship this year; methods work with industrial users

Panel Question and Answer Session

  • Q: would participatory methods work in the South Africa study? useful after the field visits when products were being explored
  • Q: how were users compensated? mines: small gifts, cafés: coffee cards worth about $10, need to pay based on local culture and environment
  • Q: did you run into situations where you weren't willing to work with individuals? Korea: hard to approach middle-aged men, mines: no issues, Africa: screener was actually a little too strict
  • Q: did miners worry about effects of automation? increases safety and is more of a supervisory role so helped avoid uncomfortable work situations
  • Q: how did you pick the right users? Africa: worked with local marketing firms
  • Q: why two translators? difficult to translate directly, so played off each other and could also run errands and help deal with situations
  • Q: how did you deal with being from a very different culture? working with locals very important
  • Q: usability test didn't use the same operators? couldn't access actual operators but used company trainers who were familiar with work
  • Q: did you have to consider non-standard conditions or failure conditions? have to be able to get out of full auto mode, still need to teach manual ways
  • Q: how do you avoid self-reporting bias and get an accurate baseline? mines: observe actual work in environment; cafe: many of our team are also app users, so piloted with them

CHI 2014: Plenary – Elizabeth Churchill of eBay Research Lab

Reasons to Be Cheerful – Part 4

  • song: Reasons to Be Cheerful – Part 3
  • HCI is good at thinking of other people's points of view, imagining ourselves as other people; eg users, maker communities; noticing, reflecting, questioning; everything seems to be speeding up, but we must take the time to think and reflect
  • enjoyment comes from physiological needs met, strong relationships, meaningful work/activities, perspective and passion
  • key directions: proactive health and well-being; marketplaces and exchanges; education and self-directed learning; data collection, curation, analytics, experimentation, and interpretation; internet of things
  • we in the HCI community have responsibility to keep people – as individuals and communities – in our technology systems; don't filter out the human emotions, empathy, culture, physiology, psychology; ensure that technology engenders and taps into joy
  • known problems, known solutions; known problems, unknown solutions; unknown problems, unknown solutions
  • Reflect: 5 things you found here that surprised you in a positive way; 4 new approaches or methods; 3 people you'd like to be in touch with; 2 sub-areas where you are out of your comfort zone you might influence; 1 grand challenge that you can engage in that may change the world

CHI 2014: Decisions, Recommendations, and Machine Learning

Customization Bias in Decision Support Systems by Jacob Solomon

  • user satisfaction improves with customizability; is it a good design choice for decision support systems?
  • data -> system -> recommendation -> decision maker -> decision
  • some systems support customization; customization -> recommendation quality -> decision quality; is this always true?
  • customization bias: bias because decision maker has a part in driving the recommendation; reduce ability to evaluate quality of recommendation; supports confirmation bias
  • experiment: fantasy baseball; predict scores assisted by DSS; one group could adjust statistical categories used, other couldn't; recommendations were predetermined, no algorithm, and both got same recommendations; subjects received 8 good recommendations and 4 poor recommendations; 99 MTurk participants with fair baseball knowledge
  • findings: customizers had slightly better recommendations, but not the point of the study; customizers were more likely to agree with system; more likely to agree if recommendation was consistent with customization (confirmation bias); customization can enhance trust in system but trust is sometimes misplaced; ties decision making more to quality of recommendation (whether it gives good or poor ones)

Structured Labeling for Facilitating Concept Evolution in Machine Learning by Todd Kulesza

  • data needs to be labeled for machine to distinguish; people don't always label consistently; concept evolution – mentally define and refine concept
  • study: can we detect concept evolution; 9 experts, 200 pages, twice with 2 weeks in between; experts were only 81% consistent with prior labeling
  • can we help people define and refine the concept while labeling? added a 'could be' choice alongside yes and no to allow additional refinement later, after the concept is refined; participants often didn't name the groups, so provided automated summaries; forgot what they did with a similar page, so automatically recommend a group; not sure some pages were worth structuring, so show similar future pages
  • study: 15 participants, 200 pages, 20 minutes, 3 simple categories; conditions of no structure, manual structure, and assisted structure
  • findings: manual structuring created many more groups than automated; also made many more adjustments in the first half of the experiment, fewer later; manual structuring more than tripled consistency and assisted almost tripled it; took longer than baseline to label early items, but not longer for later items; preferred structured and assisted over baseline; easier to verify a recommendation than to come up with their own
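A toy sketch of one assist mentioned above: recommending an existing group for a new page by textual similarity. The group names, pages, and the TF-IDF/cosine-similarity approach are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: suggest which existing group a new page probably belongs to.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

groups = {  # group -> pages already labeled into it (made-up examples)
    "cooking blogs": ["chocolate cake recipe and baking tips",
                      "weeknight pasta dinner ideas"],
    "travel":        ["budget flights to lisbon", "packing list for hiking trips"],
}
new_page = "how to bake sourdough bread at home"

names = list(groups)
docs = [" ".join(pages) for pages in groups.values()] + [new_page]
tfidf = TfidfVectorizer().fit_transform(docs)

# Compare the new page against each group's concatenated pages.
sims = cosine_similarity(tfidf[len(names)], tfidf[:len(names)]).ravel()
print(f"suggest group '{names[sims.argmax()]}' (similarity {sims.max():.2f})")
```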

Choice-Based Preference Elicitation for Collaborative Filtering Recommender Systems by Benedikt Loepp

  • recommendation system: select items from large set that match interests; collaborative filtering is most popular and is effective; criticized because focus is on only improving algorithms rather than improving user's role and satisfaction in use; also at beginning have no data to work from; ratings are inaccurate, comparisons are effective, but choosing comparisons depends on preexisting data
  • goal: improve user effectiveness and control; generate a series of choices based on most important factors in a matrix factorization; items must be frequently rated, highly diverse choices, similar in non-choice factors
  • evaluation: balance automatic recommendation and manual exploration; test 4 different user interfaces – popular, manual exploration, automatic recommendation, choice based model; 35 participants using each method to choose six movies + survey
  • results: choice based significantly better than other models in all dimensions but required more effort than popular; good cost-benefit ratio; users felt in control; no profile or additional data required; works well for experience-based products
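A toy sketch of the choice-generation idea: factorize a small rating matrix, then build a choice set of frequently rated items spread widely along the most important latent factor. The data and the plain-SVD factorization are stand-ins, not the authors' system.

```python
# Hedged sketch: pick a diverse, frequently rated choice set from a
# matrix-factorization model of the ratings.
import numpy as np

ratings = np.array([            # users x items, 0 = unrated (toy data)
    [5, 4, 0, 1, 0, 2],
    [4, 5, 1, 0, 2, 1],
    [1, 0, 5, 4, 4, 0],
    [0, 1, 4, 5, 5, 4],
], dtype=float)

# Simple factorization: SVD on mean-filled ratings.
filled = np.where(ratings > 0, ratings, ratings[ratings > 0].mean())
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
item_factor1 = Vt[0]                      # item positions on the top factor

counts = (ratings > 0).sum(axis=0)        # how often each item was rated
frequent = np.where(counts >= 2)[0]       # keep only frequently rated items

# Choice set: frequently rated items spread across the factor's range.
order = frequent[np.argsort(item_factor1[frequent])]
choice_set = [order[0], order[len(order) // 2], order[-1]]
print("present items", choice_set,
      "factor-1 positions", np.round(item_factor1[choice_set], 2))
```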

ARchitect: Finding Dependencies Between Actions Using the Crowd by Walter Lasecki

  • activity recognition: system recognizing what you are doing, eg to help people who may need assistance in living; automated systems need a lot of training data for what people can recognize very easily; crowdsource labels via Legion:AR; still many permutations in behavior that must be recorded and labeled
  • approach: define dependency structure to constrain meaningful variations
  • ARchitect: ask yes/no questions about different permutations of action steps to build valid models (a small enumeration sketch follows); eg 3 videos led to 22 valid models
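A minimal sketch of the underlying idea with made-up steps and dependencies: once yes/no answers establish which steps must precede which, only the consistent orderings count as valid models.

```python
# Hedged sketch: enumerate action-step orderings consistent with
# crowd-confirmed dependencies (steps and dependencies are illustrative).
from itertools import permutations

steps = ["fill kettle", "boil water", "add tea bag", "pour water"]
deps = [("fill kettle", "boil water"),   # (a, b) means a must precede b
        ("boil water", "pour water"),
        ("add tea bag", "pour water")]

valid = [p for p in permutations(steps)
         if all(p.index(a) < p.index(b) for a, b in deps)]

for seq in valid:
    print(" -> ".join(seq))
print(f"{len(valid)} valid orderings out of {len(list(permutations(steps)))}")
```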

Scalable Multi-label Annotation by Alex Berg

  • multi-label annotation: identify aspects/objects that are or are not in an image; big in machine vision
  • detect 200 categories in 100,000 images; large set is useful to many areas of research; expensive to scale, so exploit the hierarchical structure of concepts; correlation and sparsity; kind of like 20 questions for MTurk participants
  • how to select the right questions: utility, cost, accuracy
  • results: 20,000 images from set, 200 category labels; accuracy 99.5%+, 4-6x as fast
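A rough sketch of how a label hierarchy can cut down the number of questions, 20-questions style; the tiny hierarchy and the simulated answers are assumptions, not the paper's question-selection algorithm (which also weighs utility, cost, and accuracy).

```python
# Hedged sketch: ask about a broad category first and skip its whole subtree
# when the (simulated) crowd answer is "no".
hierarchy = {
    "animal":    ["dog", "cat", "bird"],
    "vehicle":   ["car", "bicycle", "bus"],
    "furniture": ["chair", "table"],
}

def annotate(image_labels):
    """Return per-label decisions while counting questions asked."""
    answers, questions = {}, 0
    for parent, children in hierarchy.items():
        questions += 1                                   # ask about the parent
        if not any(c in image_labels for c in children): # crowd says "no"
            answers.update({c: False for c in children}) # skip whole subtree
            continue
        for c in children:                               # otherwise drill down
            questions += 1
            answers[c] = c in image_labels
    return answers, questions

labels, n = annotate({"dog"})
flat = sum(len(v) for v in hierarchy.values())
print(labels)
print(f"answered with {n} questions instead of {flat} flat ones")
```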

CHI 2014: Cross-Device Interaction

Smarties: An Input System for Wall Display Development by Oliver Chapuis

  • wall display input issues: mouse/keyboard – stuck to desk; touch – too close; laser pointer, air gesture, hard to track; prefer mobile tablet
  • input programming is complex and expensive; provide a system for easy and fast prototyping; mobile device(s) -> protocol -> library -> wall
  • interface: multiple pucks representing “cursors” at multiple locations on the wall; each puck has functions of a mouse with multi-touch and other functions like text; collaborative; you have your own pucks and share control; store and retrieve pucks
  • protocol and libraries: multi-client event based; synchronization for shared pucks; C++, Java, JavaScript; similar to mouse management but applied to each puck
  • applications: attach pucks to wall lenses for exploration of map
  • software available
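The event-based, per-puck protocol described above can be pictured roughly like this Python sketch (in the spirit of the C++/Java/JavaScript libraries, not the actual Smarties API; event kinds and fields are assumptions).

```python
# Hedged sketch: wall-side code registers callbacks per event kind, and each
# event carries the puck that produced it, much like mouse handlers per cursor.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class PuckEvent:
    puck_id: int      # which shared "cursor" on the wall
    kind: str         # "move", "tap", "text", ...
    x: float = 0.0    # normalized wall coordinates
    y: float = 0.0
    text: str = ""

handlers = defaultdict(list)  # event kind -> list of callbacks

def on(kind, fn):
    handlers[kind].append(fn)

def dispatch(event):
    for fn in handlers[event.kind]:
        fn(event)

# Wall-application code: react to puck movement and typed text.
on("move", lambda e: print(f"puck {e.puck_id} moved to ({e.x:.2f}, {e.y:.2f})"))
on("text", lambda e: print(f"puck {e.puck_id} typed: {e.text}"))

dispatch(PuckEvent(puck_id=2, kind="move", x=0.4, y=0.7))
dispatch(PuckEvent(puck_id=2, kind="text", text="label A"))
```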

Conductor: Enabling and Understanding Cross-Device Interaction by Peter Hamilton

  • vision: the move from physical desk to mobile device has lost some of the utility of large physical desk space; interactions across multiple devices by one user; “symphony” of devices
  • Conductor: targeted transmissions to other devices; cue broadcasting; minimally invasive; contextual actions; persistent connections -> duet functional bonding; duet management; cross-application views; peripheral device sharing (eg share a Bluetooth keyboard); cross-device task manager
  • user study: large search and integration task given a whiteboard, paper and pen, 5 Nexus 10 and 5 Nexus 7 devices; all used the devices; used spatial memory to store specific pieces of data on individual devices

Panelrama: Enabling Easy Specification of Cross-Device Web Applications by Jishuo Yang

  • by 2017 the average household will have 4 internet enabled devices; how to take advantage of those
  • automatically reassign UI elements to the available devices based on the best device for certain functions (example uses laptop/projector = slides, phone = remote control, Pebble watch = presentation time, Google Glass = presenter notes)
  • panel: UI building block; group of UI components with a shared purpose
  • Panelrama: attributes – screen size, proximity to user, keyboard, touchscreen; developers score each characteristic for each panel – extensible attributes; Panelrama models attributes of devices; optimizes layout across multiple devices for each panel (a rough scoring sketch follows this list); minimize code changes, just define panel tag and panel definition
  • developer study: 8 developers converted single device apps to multi-device in 40 minutes or less
  • code available soon
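As a rough illustration of the scoring idea above, the sketch below weights device attributes per panel and assigns each panel to its best-scoring device; the attribute names, weights, and simple best-match assignment are assumptions, not Panelrama's actual optimizer.

```python
# Hedged sketch: score each panel against each device and pick the best match.
panels = {                       # developer-assigned attribute weights
    "slides":        {"screen_size": 5, "touch": 0, "keyboard": 0},
    "remote":        {"screen_size": 1, "touch": 5, "keyboard": 0},
    "speaker_notes": {"screen_size": 2, "touch": 1, "keyboard": 1},
}
devices = {                      # measured device attribute scores
    "projector": {"screen_size": 5, "touch": 0, "keyboard": 0},
    "phone":     {"screen_size": 1, "touch": 5, "keyboard": 0},
    "laptop":    {"screen_size": 3, "touch": 0, "keyboard": 5},
}

def fit(panel, device):
    return sum(panels[panel][a] * devices[device][a] for a in panels[panel])

for panel in panels:
    best = max(devices, key=lambda d: fit(panel, d))
    print(f"{panel:13s} -> {best} (score {fit(panel, best)})")
```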

Interactive Development of Cross-Device User Interfaces by Michael Nebeling

  • meeting room scenario: parts of UI distributed to speaker tablet, projector, and audience phones; device-centric with different roles
  • classroom scenario: teacher's device and 3 student groups; role-centric, ie student group or teacher
  • design-time requirements: device types including unknown devices; user roles; adapt UI elements across devices; design and testing; reuse
  • run-time requirements: dynamic integration of devices, update the distribution, matching and adaptation
  • XDStudio: GUI builder for multi-device UI; “DUI” = distributed user interface; on device and simulated authoring modes; user study validated that different modes are preferred for different situations; define distribution profiles for devices, device classes, and roles; client-server architecture
  • evaluation: would people use authoring modes? used the scenarios with mix of mode availability; device-centric scenario benefits more from both modes, where classroom was fine with simulation

CHI 2014: Risks and Security

Easy Does It: More Usable CAPTCHAs by Angelique Moscicki

  • CAPTCHA: block low grade, automated abuse on low risk tasks; many variations in specific features
  • usability measures: accuracy, solving time, satisfaction
  • automatic variations in features and parameters; 97,000 Mechanical Turk participants on 750,000 tests; 5,000 satisfaction surveys
  • findings: users sensitive to font choice, prefer simpler character sets, eg numeric; not sensitive to screen resolution, length; many feature interactions, 20% had nonlinear relationships! user testing required; preference for positive words, digits, and common words; random strings least preferred
  • tested and deployed new algorithm: numeric digits, removed confusion between 1 and 7 and o (oh) and 0 (zero); +6.7% accuracy, -55% reloads, -10% failed

Using Personal Examples to Improve Risk Communication for Security and Privacy Decisions by Marian Harbach

  • 67 million apps downloaded per day on Google Play in 2013; users entrust personal data to devices
  • many people do not understand permissions and become habituated, so they ignore them and just grant them
  • use concrete and personal examples to demonstrate risk; eg show photos that could be deleted or describe explicit risks like viruses or show example contacts
  • study: mockup app with pilot and Mechanical Turk; pick 2-6 apps to install and present permission screen;
  • findings: 14-23% of the time participants chose less-requesting apps or none even after app selected; didn't prevent users from choosing to install at least one app (in most cases); brand and high ratings didn't change decisions; showing personal information created negative affect including paying more attention to real permission screens

Experiences in Account Hijacking by Iulia Ion and Richard Shay

  • account compromise: example Mat Honan; lots of effort for a small goal (twitter handle) on a normal person with devastating impact to that person
  • goal: how to encourage people to use good security practices; experiences and attitudes
  • study: 294 Mechanical Turk participants; the 15-30% who said they had had an account compromised received a different survey
  • findings: accounts are often valuable and used often; attackers unknown and known (affects the relationship); harm is concrete and emotional; accept some responsibility for security; incomplete security understanding; 50% notified by others, 30% noticed content, 30% notified by service, 17% locked; 33% had email sent from account; 20% said no concrete harm; most felt negative emotions; 2/3 said it improved their security behavior; most say user and service provider are responsible; often said responsibility related to passwords; services should prevent and inform user of compromises
  • implications: use stories with emotional appeal to drive people to better security behavior; emphasize that there is more to security beyond passwords; services should have good notification mechanisms (alternative channels)

Experimenting at Scale with Google Chrome's SSL Warning by Adrienne Felt

  • active network attack: intercepting traffic between user and server; SSL supposed to protect; if something is wrong with SSL, warning is shown
  • 68% of the time people ignore the warning; often annoyed by false warnings; but the warning could be improved, eg Firefox has only a 33% click-through rate on its warning; want to stop annoying people and get informed consent
  • study: 17,000 impressions per condition over a week; with the Firefox warning in Chrome, click-through was lower but still higher than in Firefox; images of people had no impact, despite expectations from psychology; styling changes had no effect; number of extra clicks had no effect
  • other factors? better headlines and calls to action; separate action buttons physically and make less similar

Betrayed by Updates: How Negative Experiences Affect Future Security by Rick Wash

  • eg police warning at Michigan State about IE security vulnerability
  • most attacks target known vulnerabilities where patch is available; why do people not patch?
  • interviewed 37 non-expert Windows users, mostly grad students (high risk if computer compromised, low cash to replace)
  • findings: don't want unexpected changes to user interface; unused and unrecognized software, like Java; current version already works, why bother, like Adobe Reader
  • ref: Microsoft Security Intelligence Report v13, 2013; browser, Java, and Adobe products account for a large proportion of attack vectors

CHI 2014: Emotion and Mobiles

Mobile Attachment – Causes and Consequences for Emotional Bonding with Mobile Phones by Alexander Meschtscherjakov

  • 7 billion population, 6.8 billion mobile subscriptions
  • psychology: attachment theory, extended self; consumer research and design: brand attachment, product attachment
  • mobile attachment: cognitive and emotional target-specific bond connecting a person's self and a mobile device that varies in strength
  • model: causes, influences, consequences
  • causes: device-self linkage routes, design space determinants; empowerment -> utility; self (past + private + public + collective) enrichment -> memory + self image + affiliation + world view; self-gratification -> pleasure
  • influences: user (personality, brand history, ownership, etc), environment (ads, narratives, other devices), device (design, functions, quality, etc)
  • consequences: investment of resources and self-image resources, behavioral and emotional responses
  • conclusions: attachment exists, causes and consequences are not mutually exclusive, helpful to investigate theories in different disciplines
  • “I Just Want to Be Your Telephone” song

Hooked On Smartphones: Overuse Among College Students by Uichin Lee

  • smartphone overuse: disrupt social interactions, mental health, sleep patterns; technological addiction – behavioral not chemical
  • goal: identify detailed usage behaviors related to problematic usage
  • study: 95 students for a semester; 36 in risk group; android detailed usage logging tool (unlock, app use, lock, app notifications)
  • addiction scale: interference, virtual world orientation, withdrawal, tolerance
  • findings: risk group spends more time, more frequently, and in longer sessions; top 1 or 2 apps dominate usage, and the risk group is even more skewed; risk group uses more at all times, but especially morning and evening; risk group uses web apps more and possibly communication apps; instant messaging dominant; external triggers (notifications) 450+ per day in risk group; 3 times more web page visits in the risk group
  • problematic usage: feel more compelled to check devices (more anxious), less conscious and structured in usage (less self-regulation)
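A minimal sketch of turning unlock/lock logs like those described above into per-day session statistics; the timestamps and the simple parsing are made up for illustration, not the study's logging tool.

```python
# Hedged sketch: compute session count, mean length, and total use from logs.
from datetime import datetime

log = [                      # (timestamp, event) pairs from a logging tool
    ("2014-03-01 08:01", "unlock"), ("2014-03-01 08:09", "lock"),
    ("2014-03-01 12:30", "unlock"), ("2014-03-01 12:32", "lock"),
    ("2014-03-01 22:10", "unlock"), ("2014-03-01 22:45", "lock"),
]

sessions, start = [], None
for ts, event in log:
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    if event == "unlock":
        start = t
    elif event == "lock" and start is not None:
        sessions.append((t - start).total_seconds() / 60)  # minutes
        start = None

print(f"{len(sessions)} sessions, "
      f"mean length {sum(sessions) / len(sessions):.1f} min, "
      f"total {sum(sessions):.0f} min")
```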

Influence of Personality on Satisfaction with Mobile Phone Services by Nuria Oliver

  • relationship between personality and satisfaction with devices
  • satisfaction drives sustained consumption, is a focus of marketing, and an important measure in usability; used Big 5 personality dimensions
  • model: relate personality, customer satisfaction, perceived usability, device usage
  • study: 603 participants, young, gender balanced, rural and urban, used their phone at least 6 months; call data records, so basic feature-phone usage; structural equation modeling (SEM)
  • findings: biggest factor is the link between perceived usability and satisfaction (.48), mostly perceived efficiency; usage negatively correlates with satisfaction (mostly calls and duration); extroversion influences usage; extroversion and conscientiousness affect perceived usability; conscientiousness has a positive influence on satisfaction, intellect negative
  • implications: personality-based service personalization, minimize disruptions, be aware of user saturation points and usage/time budgets

Broken Display = Broken Interface? by Florian Schaub

  • 37% of mobile phones are damaged in the first 3 months; 23% of iPhones have damaged screens (disclaimer: study by an insurance company)
  • 95 photos of damaged screens from Mechanical Turk; image analysis; annotate and code damage; statistical damage analysis; damage topology
  • damage categories: minor, medium, severe; compared to self-reported touch damage, which correlates with the extent of screen damage, though people with minor screen damage perceive relatively more touch damage
  • 98% continued to use after damage for an average of 5 months, 8.4% more than a year; 70% did not plan to repair; still usable, damage insignificant, financial considerations
  • viewing issues: location, extent, opacity; typing impacted by reading impact, depending on orientation; input issues: tactile sensations, source of injury, UI elements unreachable
  • coping strategies: preventive, viewing, touch and input, calling, interaction; eg ignore or get used to, move content around, be more careful, alternative interaction paths, move to another app
  • design considerations: support scrolling and device rotation, layout and theme customization (eg dark background makes damage less noticeable), alternative interaction paths, adaptive representations when sensing damage

CHI 2014: Studying Visualization

Structuring the Space by Nathalie Henry Riche

  • people often refer to information visualizations as maps; many visualizations use spatial metaphors; picked up on contour lines from topographical maps for data
  • mental model: spatial structure + landmarks; do they help or hinder readability and understanding?
  • user study: hinder ability to find common neighbors? help perform comparison of similar graphs? help revisit nodes? 3 conditions – no structure, grid, contour lines
  • findings: no changes in readability shown; contour lines better than grids for comparison but no difference between grids and control; ensure data sets have salient features like clusters; contour better than grid or control for revisitation, even though people thought the grid helped

Highlighting Interventions and User Differences by Giuseppe Carenini

  • investigate user-adaptive visualizations; what to adapt to? when to adapt? how to adapt?
  • evaluate 4 types of highlighting interventions: bold, connected arrows, de-emphasis, reference lines
  • highlighting can layer in relevant information in a complex visualization
  • user study: 62 participants, bar graphs, tasks – retrieve value + compute derived value; look at impact of user characteristics; varied timing of intervention
  • findings: de-emphasis is best, but bold and arrows worked too; dynamic timing erased the edge of de-emphasis; a more complex task also brought de-emphasis to parity with bold and arrows; all interventions rated useful; visual working memory related to perceived usefulness of reference lines

Evaluating a Tool for Improving Chart and Graph Accessibility by Gita Lindgaard

  • how do blind people form a mental representation of a graph?
  • descriptions must be consistent; order of descriptions should follow the order of questions; start from the oldest data; need to interrogate the graph; vocabulary – x/y axis + up/down
  • iGraph: extracts semantics from excel charts and generates natural language output plus supports interaction commands
  • usability study 1: complex graphs took longer; blind people used twice as many commands as sighted and found it easier to use; all used more commands than necessary; skip was confusing; didn't use the 'where am I' command
  • usability study 2 (improved system): graph complexity had no effect, blind users used many more commands still (double check understanding), blind users navigated left more often, sighted start over more often
  • field study: system could handle most of the questions the user had about the user's chosen graphs; order of information: title, type, then other info; graphs as presented were often missing a great deal of critical metadata
  • test expert to novice vocabulary usage: iGraph vocabulary mentioned by all participants

Understanding Users' Comprehension and Preferences for Composing Information Visualizations by Huahai Yang

  • develop a system to automatically compose a visualization from multiple charts and pick best representation; choice depends on insight you are looking for, eg side by side bar good at extrema identification, lines good at correlation comparison
  • study: describe composite visualizations to discover vocabulary and concepts; Mechanical Turk led to 1,500 useful descriptions, which were then coded; 4 basic insights – read value, extrema identification, characterize distribution, correlation; all can be used for comparison as well; prioritize insights for different types of charts; value comparison, extrema, and correlation swamp other insights (Zipf function)
  • most preferred: crossed-bar (side by side) except for correlation comparison which prefers crossed-line

CHI 2014: Sensemaking and Information in Use

Odin: Contextual Document Opinions on the Go by Joshua Hailpern

  • Odin: mobile solution to get through hundreds or thousands of docs quickly; URLs, Google News, upload zip, streams like RSS; finds most relevant, most aligned, most divergent; does an executive summary based on sentence scoring; can go to statement in context; can get summary on any document
  • algorithm: topic modeling (rank order of topics) -> sentiment detection (sentence diagramming) -> aggregation (weighted distribution on ranked keywords)
  • user studies: pilot Odin vs Google News – preferred Odin all tasks, core process really solved the problem, extend for domains; comparative study Odin vs RevMiner vs Google News – choose own doc set then summarize, Odin and Google rated high on SUS, Odin had high value added to work, all participants said Odin was the best, summary is powerful
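The pipeline shape above (topic ranking -> sentence-level sentiment -> weighted aggregation) might be roughed out like this; the keyword counting and word-list sentiment are crude stand-ins, not Odin's actual topic model or sentiment detector.

```python
# Hedged sketch: score sentences by topic keywords weighted by sentiment
# strength, then surface the top sentence as a one-line summary.
import re
from collections import Counter

doc = ("The camera is excellent and the battery lasts all day. "
       "The screen is dim outdoors. "
       "Overall the camera makes this phone worth buying.")

sentences = re.split(r"(?<=[.!?])\s+", doc)
words = re.findall(r"[a-z]+", doc.lower())
keyword_rank = {w: c for w, c in Counter(words).most_common() if len(w) > 4}

positive = {"excellent", "worth", "lasts"}   # toy sentiment lexicon
negative = {"dim"}

def score(sentence):
    toks = re.findall(r"[a-z]+", sentence.lower())
    topic = sum(keyword_rank.get(t, 0) for t in toks)
    sentiment = sum(t in positive for t in toks) - sum(t in negative for t in toks)
    return topic * (1 + abs(sentiment))       # weighted aggregation

print("summary:", max(sentences, key=score))
```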

Monadic Exploration: Seeing the Whole Through the Parts by Marian Dork

  • when working with networks we can see micro (one node) or macro (the network as a whole); visual exploration between part and whole
  • monads: point of view on all entities taken severally and not as a totality; neither whole nor part, but a single element's perspective
  • principles: having (relational aspects), difference (distinct position), movement (navigate overlapping perspectives); could lead to many approaches; treat elements as vantage and navigation points + elastic layout, show difference! integrate search
  • current visualization puts the monad's detail at the center and other elements, in brief, in an ordered circle around it, at a distance based on relevance, with transparency (possibly just a dot) for more distant relationships (see the layout sketch after this list)
  • case study: people found it valuable to draw themselves into the content of the network (based on highly cross-linked book on activation)
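A minimal sketch of that layout idea under assumed relevance values: the focused element sits at the center, and each related element gets a radius and opacity driven by its relevance.

```python
# Hedged sketch: radial layout where more relevant elements sit closer to the
# focused element and distant ones fade toward a dot.
import math

related = {"alice": 0.9, "bob": 0.6, "carol": 0.3, "dave": 0.1}  # relevance

def layout(relevance, max_radius=300):
    positions = {}
    ordered = sorted(relevance.items(), key=lambda kv: -kv[1])
    for i, (name, rel) in enumerate(ordered):
        angle = 2 * math.pi * i / len(relevance)
        radius = (1 - rel) * max_radius     # more relevant = closer to center
        opacity = max(rel, 0.15)            # distant items fade to dots
        positions[name] = (radius * math.cos(angle),
                           radius * math.sin(angle), opacity)
    return positions

for name, (x, y, alpha) in layout(related).items():
    print(f"{name:6s} at ({x:7.1f}, {y:7.1f}) opacity {alpha:.2f}")
```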

Photographing Information Needs by Zhen Yue

  • role of photos in data collection; ESM (experience sampling method) – use of photos for jogging memory to be less disruptive in actual moment in data collection
  • collect qualitative data periodically and optionally add a photo; end of each day, send a survey to ask for elaboration including photo to trigger memory
  • findings: 1/3 used at least one photo; women more likely to share a photo, older people more likely to share a photo; fewer photos shared after the first day; photos led to more complete surveys; photos led to higher quality responses, though the longer responses interrupted work for longer; didn't interfere with ease-of-use ratings; 1/3 of photos were useful and relevant to researchers, especially for clarification and disambiguation

Design Insights for the Next Wave Ontology Authoring Tools by Markel Vigo

  • ontology: logical axioms that represent a field of interest; very complex and authoring is complex; very large; semantics, reasoning, inference; applied in critical domains like health; tools with poor utility
  • need to improve tools because ontologies are being used more widely including by amateurs
  • interviewed 15 ontology authors in different fields
  • recommend: provide overviews of hierarchy and complexity, provide filtering, increase situational awareness, bulk entry of large numbers of elements, retrieve from external ontologies, intelligent reasoning, evaluation features

The Role of Interactive Biclusters in Sensemaking by Maoyuan Sun

  • how to find relationships between elements in a large body of documents; visual analytics is useful
  • bicluster: cluster by two attributes simultaneously
  • Bixplorer: tool to help with interactive biclusters
  • user study: task – identify possible terrorist plots; 15 participants with no prior experience
  • findings: most started with biclusters; most found relevant documents and abandoned irrelevant docs; 1/2 of interactions were to find relevant docs; biclusters indicate potentially important entities; 1/2 of users created custom layouts using biclusters to label data
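A tiny sketch of the kind of biclustering behind such a tool: co-cluster documents and terms so each bicluster ties a document subset to a term subset. The synthetic matrix and the use of scikit-learn's SpectralCoclustering are illustrative choices, not necessarily Bixplorer's algorithm.

```python
# Hedged sketch: co-cluster a small document-term count matrix.
import numpy as np
from sklearn.cluster import SpectralCoclustering

terms = ["flight", "hotel", "wire", "transfer", "meeting", "warehouse"]
docs = np.array([          # rows = documents, columns = term counts (toy data)
    [3, 2, 0, 0, 0, 0],
    [2, 3, 0, 0, 1, 0],
    [0, 0, 4, 3, 0, 0],
    [0, 0, 3, 4, 0, 1],
    [0, 1, 0, 0, 3, 4],
])

model = SpectralCoclustering(n_clusters=3, random_state=0).fit(docs)
for k in range(3):
    rows = np.where(model.row_labels_ == k)[0]
    cols = [terms[j] for j in np.where(model.column_labels_ == k)[0]]
    print(f"bicluster {k}: docs {rows.tolist()} <-> terms {cols}")
```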

CHI 2014: Interactive Whiteboards and Public Displays

Communiplay: A Field Study of Public Display Mediaspace by Jorg Muller

  • multiple interactive displays linked across varied public spaces; how will people interact with displays in public media spaces?
  • key metric: conversion rate – percentage of people that start interacting; also explore the honey-pot effect
  • Communiplay: 6 locations in public parts of buildings; six conditions including fake users
  • observations: 1,234 interactions out of 30,888 passers-by; honey-pot effect exists, local and remote, and more participants lead to more; local is much stronger; fake users didn't show differences; interaction duration increases with more users; play together and with objects, waving, punching/kicking, mimicking; ghost effect – remote passer-by causes a local to turn around; landing effect – pass by, then return and do a small interaction

P-LAYERS – A Layered Framework for Public Displays by Nemanja Memarovic

  • many more interactive displays today, and growing; people like them especially when well executed; what are the design attributes for a successful one?
  • 3 different installations with different attributes
  • hardware layer: use same hardware in lab and production, communicate affordances on screen, hardware failures and support issues
  • system architecture: consider scalability, keep up with 3rd party APIs
  • content: user-generated and auto-generated perform the same, keep content fresh and relevant
  • system interaction: skipped for time
  • community interaction design: communicate value prop to the user, avoid effects of competition, guilt, and other negative impacts
  • interplay between layers, self-reflection on individual awareness and interests, tabulating issues, understand effort required at each layer

Posting for Community and Culture: Design of Interactive Digital Bulletin Boards by Claude Forth

  • what types of content are on non-digital boards, how do they impact the community? classify postings
  • 59 bulletin boards, 1,297 postings
  • findings: geographic relevance, contextual relevance, aesthetic aspects; postings were highly local; postings highly related to the purpose of the place where board is located such as cultural or entertainment; more personal ads on outdoor and commercial boards; affordances are important to cause action; tangibility and texture; lighting and contrast; controlled boards were much more neatly arranged than uncontrolled; empty boards stay empty, messy ones change often; spill beyond board surface; modern spaces don't have boards, so no community or ownership; retail shops have boards with their own identity
  • conclusion: physical and local instead of virtual and global

I Can Wait a Minute: Optimal Delay Time for Public Display Content by Miriam Greis

  • many boards don't yet have user-generated content; often only in university settings
  • does it need moderation? risk of inappropriate content; what delay would be tolerated and what effects does it have?
  • expectations: 83% of participants think moderation is needed, but expect content to appear instantly; without moderation, willing to wait 1 minute; when informed of moderation, 60% would wait longer
  • research app: display 12 recent tweets to handle; no notice of moderation; 0, 30, or 90 second delays; 519 messages from 95 users
  • findings: longer delays led to less posting, but didn't affect likelihood of additional postings