
DartNets Lab's HiLight System Won Best Demo at MobiSys'15

HiLight is the first system that allows screens (e.g., the screens of TVs, laptops, tablets, or smartphones) to talk to camera-equipped devices without users noticing, regardless of the content shown on the screen. It works even with dynamic screen content generated on the fly by user interactions (e.g., gaming, web browsing). It removes the need to show unwieldy barcodes (e.g., QR codes) on the screen, so the screen can display content as it normally does while the communication happens behind the scenes, in real time.

This work was presented and demoed at ACM MobiSys'15 on May 20 in Florence, Italy, where it won the Best Demo Award.

Check out the demo video on the HiLight project website: http://dartnets.cs.dartmouth.edu/hilight.

Find out about other interesting research projects in the DartNets Lab at: http://dartnets.cs.dartmouth.edu.

Dartmouth Readies Students for Cybersecurity Challenges

There are approximately 3.5 billion devices in the U.S. today connected through the Internet—smartphones, laptops, tablets, servers—and by 2020 there will be 45 billion, predicts William Nisen, associate director of the Institute for Security, Technology, and Society (ISTS) at Dartmouth.

“We are going to have machines talking to machines without human intervention, and unless we get the security right we are going to wind up with a huge problem,” he says.

“Today there are about 2 million correctly certified web servers on the Internet, but we don’t have a fully effective way to tell it’s really ‘Amazon’ on the other end,” says ISTS Director Sean Smith. “What will happen when the number of these things increases a thousand-fold?”
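
As a concrete illustration of the verification step in question, here is a minimal Python sketch that connects to a server and checks its TLS certificate against the system's trusted roots. The hostname is just an example; this shows the mechanics of the check, not a complete solution to the impersonation problem described above.

    # Minimal sketch: connect to a server and verify its TLS certificate
    # against the system's trusted certificate authorities. The hostname
    # is just an example; this illustrates the verification step only.
    import socket
    import ssl

    hostname = "www.amazon.com"  # example host
    context = ssl.create_default_context()  # loads the system CA roots

    with socket.create_connection((hostname, 443), timeout=5) as sock:
        # wrap_socket validates the certificate chain and checks that the
        # certificate actually names this host; it raises
        # ssl.SSLCertVerificationError otherwise.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            subject = dict(item[0] for item in cert["subject"])
            issuer = dict(item[0] for item in cert["issuer"])
            print("Subject:", subject.get("commonName"))
            print("Issuer:", issuer.get("commonName"))
            print("Valid until:", cert["notAfter"])

Note that this check rests entirely on the certificate-authority infrastructure, which is exactly the trust machinery whose ability to scale Smith questions.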

Read the full story on Dartmouth Now

Computational Design of a "Rocker" Protein Cracks a Decades-old Puzzle

Human cells are protected by a largely impenetrable molecular membrane, but Prof. Gevorg Grigoryan and a team of collaborators have built the first artificial transporter protein that carries individual atoms across membranes, opening the possibility of engineering a new class of smart molecules with applications in fields as wide-ranging as nanotechnology and medicine. This work, which appeared in the journal Science, is a milestone in designing and understanding membrane proteins. The study was a collaboration among researchers at several universities, including Dartmouth's Gevorg Grigoryan, MIT's Mei Hong, and University of California, San Francisco investigators William F. DeGrado and Michael Grabe.

Emily Whiting at TEDxBeaconStreet

TEDxBeaconStreet gathers a group of thought leaders from a variety of fields to share their intriguing, actionable ideas. This November, Assistant Professor Emily Whiting spoke about her work in the emerging field of computational fabrication. 3D printers are revolutionizing the manufacturing and design industry, allowing us to create shapes of astounding complexity and precision. Emily Whiting explains that the power of digital fabrication goes beyond looks; an untapped potential exists to design not just the shape, but the physical behavior of 3D printed objects. The key is the unprecedented ability to create intricate, hidden interior structures. Prof. Whiting describes how computational methods can exploit fundamental principles of physics to produce these structures, changing the way we design for the world of digital fabrication and helping us re-imagine everyday objects.
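
As a generic illustration of the kind of physics-aware computation involved (this sketch is not Prof. Whiting's actual method): hollowing hidden interior voxels shifts an object's center of mass, which changes how the object balances.

    # Generic illustration, not the actual method from the talk: the
    # center of mass of a voxelized solid, and how carving a hidden
    # interior cavity shifts it.
    import numpy as np

    def center_of_mass(voxels):
        """voxels: 3D array of material densities (0 = empty)."""
        coords = np.argwhere(voxels > 0)        # (N, 3) indices of filled voxels
        masses = voxels[voxels > 0]             # density of each filled voxel
        return (coords * masses[:, None]).sum(axis=0) / masses.sum()

    solid = np.ones((10, 10, 10))               # a solid cube
    print(center_of_mass(solid))                # [4.5 4.5 4.5]

    hollowed = solid.copy()
    hollowed[1:9, 1:9, 5:9] = 0                 # carve a cavity in the upper half
    print(center_of_mass(hollowed))             # center of mass shifts downward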

Researchers Create New Intelligent Software

Computer scientists at Dartmouth have created artificial intelligence software that uses photos, instead of just text, to locate documents on the Internet, reports The Economic Times.

“By studying results from text-based image search engines, the software recognizes the pixels associated with a search phrase and applies them to other photos without tags or captions, locating them more accurately,” the newspaper explains.

Lorenzo Torresani, an associate professor of computer science and co-author of the study, says that “modern machine vision systems are accurate and efficient enough to make effective use of the information contained in image pixels to improve document search.”
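
A highly simplified sketch of the underlying idea, not the authors' actual system: treat images returned by a text-based image search as visual examples for the phrase, then rank untagged images by similarity of simple pixel features. Real systems learn far richer features than the color histograms used here.

    # Highly simplified sketch, not the authors' system: use images that
    # a text-based search returned for a phrase as visual examples, then
    # rank untagged images by similarity of simple pixel features.
    import numpy as np

    def color_histogram(image, bins=8):
        """A crude pixel feature: a normalized joint RGB histogram."""
        hist, _ = np.histogramdd(image.reshape(-1, 3),
                                 bins=(bins, bins, bins),
                                 range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def rank_untagged(query_examples, untagged):
        """Order untagged images by similarity to the query's examples."""
        proto = np.mean([color_histogram(im) for im in query_examples], axis=0)
        # histogram intersection as a simple similarity measure
        sims = [np.minimum(proto, color_histogram(im)).sum() for im in untagged]
        return np.argsort(sims)[::-1]           # most similar first

    # Usage with random stand-in "images" (H x W x 3 arrays of 0-255 values):
    rng = np.random.default_rng(0)
    examples = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
    pool = [rng.integers(0, 256, (64, 64, 3)) for _ in range(10)]
    print(rank_untagged(examples, pool))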

Improving Algorithmic Thinking in 10 Minutes

Professor Thomas Cormen posted an answer to the following question on the social media site quora.com: "What can I learn right now in just 10 minutes that could improve my algorithmic thinking?" His answer has received over a thousand "upvotes" and has since been featured on Forbes.com. And so, what can one learn in 10 minutes to improve one's algorithmic thinking? Answer by Thomas Cormen:

It's pretty hard to answer that question without knowing what you already know.  If I had to give just one thing, that thing would be loop invariants.  Understand that when you write a loop, you either implicitly or explicitly use a loop invariant.

A loop invariant is a predicate (a statement that is either true or false) with the following properties:

- Initialization: it is true before the first iteration of the loop.
- Maintenance: if it is true before an iteration of the loop, it remains true before the next iteration.
- Termination: when the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.
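
To make this concrete, here is a small example that is not part of the original answer: a loop that finds the maximum of a list, with its invariant spelled out and checked with asserts.

    # Illustrative example (not from the original Quora answer): finding
    # the maximum element of a non-empty list, with the loop invariant
    # made explicit and checked with asserts.
    def find_max(values):
        """Return the largest element of a non-empty list."""
        best = values[0]
        # Invariant: before each iteration, best == max(values[:i]).
        for i in range(1, len(values)):
            assert best == max(values[:i])   # holds on entry to every iteration
            if values[i] > best:
                best = values[i]
        # Termination: the loop has processed every element, so by the
        # invariant, best is the maximum of the entire list.
        assert best == max(values)
        return best

    print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))   # prints 9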

Screens Talking to Cameras without You Knowing it

Graduate students Tianxing Li and Chuankai An and Professors Andrew Campbell and Xia Zhou received the Best Paper Award at ACM VLCS'14 for their work on advancing screen-to-camera communication.

We are all familiar with QR codes: coded images shown on a screen (e.g., smartphone screens, TVs) that can be captured by a phone camera and translated into data.

The new system, called HiLight, removes the need to show any coded images like QR codes while still enabling data transmission between screens and cameras. HiLight encodes data into pixel color intensity changes that human eyes cannot perceive but cameras can.
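
The following toy sketch conveys the general flavor of such a channel; it is not HiLight's actual encoding scheme, and the parameters are made up for illustration. Each bit becomes a tiny per-frame brightness offset that is too small for the eye to notice but measurable by a camera comparing frames against a reference.

    # Toy sketch of the general idea, not HiLight's actual algorithm:
    # encode bits as tiny per-frame brightness changes, invisible to the
    # eye but detectable by a camera comparing frames to a reference.
    import numpy as np

    DELTA = 2  # brightness offset per frame, out of 255 (imperceptibly small)

    def encode(frame, bits):
        """Produce one output frame per bit: +DELTA for a 1, -DELTA for a 0."""
        out = []
        for b in bits:
            offset = DELTA if b else -DELTA
            out.append(np.clip(frame.astype(int) + offset, 0, 255).astype(np.uint8))
        return out

    def decode(frames, reference):
        """Recover bits by comparing each frame's mean brightness to a reference."""
        ref = reference.mean()
        return [1 if f.mean() > ref else 0 for f in frames]

    # Usage: a gray "screen content" frame carrying the bits 1, 0, 1, 1
    screen = np.full((120, 160, 3), 128, dtype=np.uint8)
    sent = [1, 0, 1, 1]
    print(decode(encode(screen, sent), screen) == sent)   # prints True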

By creating such a hidden communication channel, HiLight opens up new opportunities for smart devices (e.g., smartphones, smart glasses) to interact with each other, enabling new interaction designs and context-aware applications.

Check out HiLight and other projects and research opportunities in the newly formed DartNets Lab, co-directed by Professors Xia Zhou and Andrew Campbell.

"Information Security War Room" at USENIX Security

Sergey Bratus and Felix 'FX' Lindner delivered a joint invited talk at this year's USENIX Security Conference. This premier conference brings together attendees from academia, industry, and government.

The talk, entitled "Information Security War Room," examined the state of IT security; the implications of the ongoing computer insecurity epidemic for national security and "cyberwarfare"; the current misguided attempts of various governments to regulate research into computer attacks; and the strategic options the computer security community has left to reverse the current trend of ubiquitous insecurity and make practical progress toward computers we could finally trust.

The talk received considerable attention; the slides posted online have received over 30,000 download requests to date.

CS students phone in their feelings

Much of the stress and strain of student life remains hidden. The StudentLife study, led by Professor Andrew Campbell, built a smartphone sensing app that 48 computer science students used over the 10 weeks of the 2013 spring term, and it revealed a number of interesting findings. Researchers found that objective sensing data from the students' phones significantly correlated with academic performance (grades, GPA) and mental health (stress, loneliness, depression, and flourishing).
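
As a small illustration of the kind of analysis involved, with made-up numbers rather than the study's data: correlating a passively sensed behavior (nightly sleep hours) with a self-reported outcome (a stress score).

    # Illustration only, with made-up numbers (not the StudentLife data):
    # correlate a passively sensed behavior with a self-reported outcome.
    from scipy.stats import pearsonr

    sleep_hours  = [7.5, 6.0, 8.1, 5.5, 7.0, 6.4, 8.3, 5.9]   # sensed by phone
    stress_score = [2.0, 3.5, 1.5, 4.0, 2.5, 3.0, 1.0, 3.8]   # self-reported

    r, p = pearsonr(sleep_hours, stress_score)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # strongly negative in this toy data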

The study captured behavioral trends across the Dartmouth term. For example, students returned from spring break feeling good about themselves: relaxed (i.e., low stress levels), sleeping well, and going to the gym regularly. That all changed once the Dartmouth term picked up speed toward midterms and finals, as shown in the plot.

Lorenzo Torresani Wins a Google Faculty Research Award

Our own Lorenzo Torresani has won a Google Faculty Research Award. Dr. Torresani aims to use deep learning (i.e., the training of deep neural networks) to discover compact representations of video that work well for classifying human pose dynamics.

Dr. Torresani proposed to learn semantic primitives to represent human actions in video. The primitives are learned by training deep convolutional neural networks to classify different human pose dynamics. Such a learned representation promises to significantly improve the accuracy of video understanding applications, including action recognition, semantic segmentation of video, and search and retrieval.
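
A minimal PyTorch sketch of the general architecture family; the layer sizes and structure here are placeholders, not the design from the proposal. A small 3D convolutional network maps a video clip to a compact descriptor, which both feeds a pose-dynamics classifier and can be reused for search and retrieval.

    # Minimal sketch of the general idea in PyTorch; layer sizes are
    # placeholders, not the architecture from the proposal. A small 3D
    # convnet maps a video clip to a compact descriptor that feeds a
    # pose-dynamics classifier and can be reused for retrieval.
    import torch
    import torch.nn as nn

    class VideoDescriptorNet(nn.Module):
        def __init__(self, num_classes, descriptor_dim=256):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 32, kernel_size=3, padding=1),   # input: (C, T, H, W)
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),                      # global pooling
            )
            self.descriptor = nn.Linear(64, descriptor_dim)   # compact representation
            self.classifier = nn.Linear(descriptor_dim, num_classes)

        def forward(self, clip):
            x = self.features(clip).flatten(1)
            d = self.descriptor(x)            # the reusable video descriptor
            return self.classifier(d), d

    # Usage: a batch of two 16-frame RGB clips at 112x112 resolution
    model = VideoDescriptorNet(num_classes=50)
    logits, desc = model(torch.randn(2, 3, 16, 112, 112))
    print(logits.shape, desc.shape)           # torch.Size([2, 50]) torch.Size([2, 256])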

The technical novelty of the approach is twofold:
