IEEE Computer, Robotics, EMBS & GRSS Societies and GBC/ACM

6:30-9:00 PM, Thursday, 1 November 2018

Brown University Watson Center for Information Technology (CIT), corner of Waterman St. and Brook St., Providence

Parking is available in a lot across the street from the CIT.

Robotics and Visual Computing Lab tours at Brown University's CS department


Abstract:

1) Robotics: Prof. Stefanie Tellex, colleagues, and students
   a. Demo 1: Virtual reality teleoperation. Visitors put on a VR headset, see a visualization of the robot's sensor stream, and teleoperate the robot in VR to pick up and manipulate objects. See: https://youtu.be/e3jUbQKciC4
   b. Demo 2: Drones. Students from Brown and the PCTA will give a live demo of our low-cost autonomous drone, part of our goal to empower every person with a collaborative robot.
   c. Demo 3: Social feedback. Visitors use language and gesture to point out an object to the Baxter robot, which responds by delivering the object. (A toy sketch of this idea appears after this list.)
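Demo 3 fuses two noisy signals, speech and pointing, to identify the object a visitor means. Purely as a hedged, toy illustration of that general idea (not Prof. Tellex's actual method), the Python sketch below combines per-object language and gesture scores and picks the most likely referent; all object names and probabilities are invented for the example.

```python
# Toy illustration only (not the actual demo system): fuse a language score
# and a gesture score to decide which object a visitor is referring to.
def pick_referent(language_probs, gesture_probs):
    """Return the most likely object plus normalized combined scores."""
    scores = {obj: p * gesture_probs.get(obj, 0.0)
              for obj, p in language_probs.items()}
    total = sum(scores.values())
    posterior = {obj: s / total for obj, s in scores.items()}
    return max(posterior, key=posterior.get), posterior

# Invented example: speech favors "red mug"; the pointing gesture is ambiguous
# between the mug and the spoon.
language_probs = {"red mug": 0.6, "blue bowl": 0.3, "spoon": 0.1}
gesture_probs  = {"red mug": 0.5, "blue bowl": 0.1, "spoon": 0.4}

best, posterior = pick_referent(language_probs, gesture_probs)
print(best)       # -> red mug
print(posterior)  # combined, normalized scores per object
```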

2) Visual Computing: Prof. Daniel Ritchie and students
   a. Indoor scene synthesis: learning how to choose and lay out furniture and other objects in indoor spaces using deep neural networks. We have a short narrative video that gives an overview of how the method works, and PhD student Kai can also present a poster on the same project.
   b. Learning visual style compatibility of 3D objects: given two 3D objects, e.g., assets that might be used for a game or VR application, can a computer quantify how well they would "go together" if used in the same scene? We are training neural networks to do this; one source of data is human judgments of style similarity. (A hedged sketch of this kind of model appears after this list.)
   c. Building a large dataset of articulated 3D models: existing large 3D model datasets consist of only static geometry. We are building a large dataset of objects annotated with part mobilities (e.g., door handles can turn) for use in VR, robotics, and other applications.
   d. Visual program induction: given an image or a 3D model, can a computer infer a high-level program that, when executed, reproduces the input image or model? This ability enables interesting "semantic" edits to the image or shape. We can show results from recent systems that do this in several domains, including converting hand-drawn graph sketches into LaTeX-like programs.
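Item 2b asks a network to score how well two 3D assets "go together." The PyTorch sketch below is only a minimal, hypothetical illustration of that kind of model, not the Brown group's architecture: a shared encoder (with assumed feature and embedding sizes) maps each object's features to a style embedding, and the pair's compatibility is scored from the distance between embeddings, the sort of model one could fit to human style judgments.

```python
# Minimal, hypothetical style-compatibility scorer (NOT the group's model):
# embed each object's features and score how well the pair "goes together".
import torch
import torch.nn as nn

class StyleCompatibility(nn.Module):
    def __init__(self, feat_dim=256, embed_dim=64):  # dimensions are assumptions
        super().__init__()
        # Shared encoder maps per-object features (e.g., geometry or
        # rendered-view descriptors -- also an assumption) to a style embedding.
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, feats_a, feats_b):
        za, zb = self.encoder(feats_a), self.encoder(feats_b)
        # Score in (0, 1): closer embeddings -> more stylistically compatible.
        return torch.sigmoid(1.0 - torch.norm(za - zb, dim=-1))

model = StyleCompatibility()
a, b = torch.randn(4, 256), torch.randn(4, 256)  # stand-in features, 4 pairs
print(model(a, b))  # one compatibility score per pair
# Such a scorer could be trained with binary cross-entropy against human
# "same style / different style" judgments.
```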

3) Visual Computing: Prof. James Tompkin and students
   a. Unsupervised machine learning-based image translation: see yourself transformed into a cat or an anime character using machine learning techniques that automatically learn how to "translate" between classes of objects in images. (A sketch of the idea behind this kind of translation follows this list.)
   b. Organizing databases of imagery with interactive labeling: it is easy to scrape databases of imagery online, but how can we organize them easily when they are unlabeled? We show an interactive labeling system for quickly organizing databases of human body geometry or artistic paintings according to user-defined criteria.
   c. Light field segmentation and rendering for image editing: as smartphones become multi-camera systems, how can we consistently edit images and video captured with these camera arrays? (images/video)
   d. Machine perception of data visualizations: can current machine-learning perception systems reason about data visualizations such as graphs, and what does this tell us about the strengths and weaknesses of machine vision systems compared with human vision? (images/video)
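Item 3a learns to translate between image classes (e.g., faces to anime characters) without paired examples; published methods in this family (e.g., CycleGAN-style models) typically rely on a cycle-consistency loss. The fragment below is a hedged sketch of just that loss term, with single-convolution placeholders standing in for the real generator networks; it is not the lab's code.

```python
# Hedged sketch of the cycle-consistency term used by unsupervised
# image-to-image translation methods (e.g., CycleGAN-style models).
import torch
import torch.nn as nn

# Placeholder "generators": G maps domain A -> B (e.g., face -> anime),
# F maps B -> A. Real systems use much deeper encoder-decoder networks.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)
l1 = nn.L1Loss()

real_a = torch.rand(1, 3, 64, 64)  # stand-in image from domain A
real_b = torch.rand(1, 3, 64, 64)  # stand-in image from domain B

# Translating A -> B -> A should recover the original image (and likewise
# B -> A -> B), which is what lets the model train without paired data.
cycle_loss = l1(F(G(real_a)), real_a) + l1(G(F(real_b)), real_b)
print(cycle_loss.item())
# A full system adds adversarial losses from a discriminator on each domain.
```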

4) Visual Computing: Prof. Andy van Dam and students
   a. Vizdom: interactive analytics through pen and touch. Vizdom's frontend lets users visually compose complex workflows of machine learning and statistics operators on an interactive whiteboard, and the backend leverages recent advances in workflow compilation techniques to run these computations at interactive speeds. (Joint work with Prof. Tim Kraska of MIT and van Dam's Ph.D. student Emanuel Zgraggen, now a postdoc with Kraska at MIT.)
   b. Dash: a pen- and touch-enabled 2D information management system for desktops, slates, and large interactive whiteboards. Using unbounded 2D workspaces, users can gather documents and fragments from a variety of sources, organize them spatially and hierarchically, annotate them, and hyperlink related content to discover and encode relationships. New insights can be presented via customizable dashboards and slide-sequence-style presentations.

5) Visual Computing: Prof. David Laidlaw and associates: the YURT, a high-resolution VR facility. It displays over 100 million stereo pixels using 69 full-HD projectors driven by 20 nodes of an HPC cluster. The projectors display onto 145 mirrors covering a 360-degree surface, including overhead and underfoot; at normal viewing distances, the pixels are smaller than the human retina can resolve. Visitors will walk two blocks to 180 George St. and put on 3D stereo glasses for an immersive virtual reality demonstration of science and education projects, as well as some applications that are a bit more frivolous. These demos will likely run in groups of at most 10 visitors in fixed 20-minute slots, in contrast to the more free-form demos in the CS Department's labs.
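As a quick back-of-the-envelope check of the figures above (assuming each of the 69 projectors outputs standard full HD, 1920 x 1080, which the text implies but does not state explicitly), the raw pixel count lands well above 100 million:

```python
# Sanity check of the YURT pixel count quoted above, assuming each of the
# 69 full-HD projectors outputs 1920 x 1080 pixels.
projectors = 69
width, height = 1920, 1080
print(f"{projectors * width * height:,}")  # 143,078,400 -- over 100 million
```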

This joint meeting of the Boston Chapters of the IEEE Computer, Engineering in Medicine and Biology (EMBS), and Geoscience and Remote Sensing (GRSS) Societies and GBC/ACM will be held in the Watson Center for Information Technology (CIT), at the corner of Waterman St. and Brook St., Providence, RI, which is the headquarters of the CS department on the Brown University campus. Parking is available in a lot across the street from the CIT.

Up-to-date information about this and other talks is available online at https://ewh.ieee.org/r1/boston/computer/. You can sign up to receive updated status information about this talk and informational emails about future talks at https://mailman.mit.edu/mailman/listinfo/ieee-cs, our self-administered mailing list.

Updated: Oct 18, 2018.