Date: Thursday, May 3rd, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Baruch Awerbuch
Johns Hopkins University
Abstract:
The purpose of P2P reputation systems is to enable peers to leverage the
experience of other peers, whose taste and integrity are questionable. For
example, electronic commerce engines like eBay depend heavily on reputation
systems, which allow a peer to check another party's reputation before any
transaction and so avoid dealing with disreputable peers.
Other systems like Amazon and Netflix provide recommendation on movies and
books based on experience of other peers with widely varying taste.
We show that a "peer group" of honest peers with the same or similar taste can effectively cooperate by essentially filtering out the inputs of irrelevant or malicious peers, even if that peer group is in the minority. That is, the net effect is as if the members of the peer group knew in advance who the other members of that group were.
This holds for any behavior pattern, and in particular for completely adversarial behavior of the peers outside the peer group. The talk will also briefly review some of the past work in this area.
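To make the filtering idea concrete, here is a small illustrative sketch (not the algorithm from the talk; all names and data are made up): a peer weights each recommender by how often that recommender agreed with the peer's own past experience, so like-minded honest peers dominate the aggregate while adversarial input is discounted.

```python
# Toy illustration (not the algorithm from the talk): a peer downweights
# recommenders whose past opinions disagreed with its own experience, so the
# aggregate recommendation is dominated by like-minded (honest) peers.

def peer_weight(my_ratings, peer_ratings):
    """Agreement-based weight: fraction of co-rated items where we agree."""
    common = set(my_ratings) & set(peer_ratings)
    if not common:
        return 0.5  # no evidence either way
    agree = sum(1 for item in common if my_ratings[item] == peer_ratings[item])
    return agree / len(common)

def recommend(my_ratings, all_peer_ratings, candidate_item):
    """Weighted vote on an item I have not tried yet."""
    score, total = 0.0, 0.0
    for peer, ratings in all_peer_ratings.items():
        if candidate_item not in ratings:
            continue
        w = peer_weight(my_ratings, ratings)
        score += w * ratings[candidate_item]
        total += w
    return score / total if total > 0 else None

# Example: two like-minded peers and one adversarial peer.
mine = {"A": 1, "B": 0, "C": 1}
peers = {
    "friend1":   {"A": 1, "B": 0, "D": 1},
    "friend2":   {"A": 1, "C": 1, "D": 1},
    "adversary": {"A": 0, "B": 1, "C": 0, "D": 0},
}
print(recommend(mine, peers, "D"))  # close to 1: the adversary is filtered out
```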
Baruch Awerbuch is a professor of Computer Science at the Johns Hopkins University. He received his Ph.D. in 1984 from the Technion, Haifa. From 1984 until 1994 he was at MIT as a post-doc, research associate, and faculty member in the Mathematics Department. His research interests are in the areas of algorithms, wireless networks, learning, security, and peer-to-peer systems, with an emphasis on algorithmic approaches to real systems and networks.
Date: Friday, April 27th, 2007
Time: noon — 1:15 pm
Place: ARTS Lab Theatre
Ken Perlin
Media Research Lab, NYU
Abstract:
Many years ago Walt Disney spoke of the quest to create the "Illusion of
Life". In fact, in every era of human history, this quest has evolved new
kinds of literacy, from the first cave paintings to the written word,
music, drama, cinema, animation and beyond. Recently it has become
possible to create this illusion interactively. But what makes for an
effective experience, once an audience can respond back? What makes us
care about an interactive character? We will show some recently developed
techniques for breathing life into interactive characters. These
techniques may point the way to a new era where cinema intersects with
interactive narrative and on-line community.
And we're also going to make sheep waltz.
Bio:
Ken Perlin is a professor in the Department of Computer Science at New
York University. He was the founding director of the Media Research Laboratory
and also directed the NYU Center for Advanced Technology from 1994-2004.
His research interests include graphics, animation, user interfaces,
science education and multimedia. In January 2004 he was the featured
artist at the Whitney Museum of American Art. In 2002 he received the NYC
Mayor's award for excellence in Science and Technology and the Sokol award
for outstanding Science faculty at NYU. In 1997 he won an Academy Award
for Technical Achievement from the Academy of Motion Picture Arts and
Sciences for his noise and turbulence procedural texturing techniques,
which are widely used in feature films and television. In 1991 he received
a Presidential Young Investigator Award from the National Science
Foundation.
Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY, where the first feature film he worked on was TRON. He has served on the Board of Directors of the New York chapter of ACM/SIGGRAPH, and currently serves on the Board of Directors of the New York Software Industry Association.
Date: Tuesday, April 24, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Martin Burtscher
Cornell University
Abstract:
High-end microprocessors are poorly utilized because they often have to wait
for data to be delivered. This talk presents several novel hardware and software
ideas that alleviate this inefficiency by reducing the time it takes to access
the data. First, we briefly discuss two latency-hiding techniques, load-value
prediction and source code scheduling. Second, we present an innovative multicore
approach called Future Execution to lower the access latency. Future Execution
pairs up cores and transparently and continuously converts the running program
thread into a helper thread that is streamed into the alternate core. Third,
we introduce a high-speed lossless compression algorithm to decrease the data
size in real time. We conclude the talk with an outlook into the future.
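As a concrete reference point for the first of these latency-hiding techniques, the sketch below shows a classic last-value load-value predictor (a standard baseline, not necessarily the specific predictor discussed in the talk; the trace and table size are made up for illustration).

```python
# Minimal sketch of a last-value load-value predictor (a classic baseline,
# not necessarily the predictor discussed in the talk). The table is indexed
# by the load instruction's PC; a prediction counts as correct when the loaded
# value matches the value seen the last time this load executed.

class LastValuePredictor:
    def __init__(self, table_size=1024):
        self.table_size = table_size
        self.table = {}          # PC index -> last observed value

    def predict(self, pc):
        return self.table.get(pc % self.table_size)

    def update(self, pc, actual_value):
        self.table[pc % self.table_size] = actual_value

# Toy trace of (pc, loaded value) pairs: the load at pc=0x40 keeps returning 7.
trace = [(0x40, 7), (0x44, 3), (0x40, 7), (0x44, 4), (0x40, 7)]
lvp = LastValuePredictor()
hits = 0
for pc, value in trace:
    if lvp.predict(pc) == value:
        hits += 1
    lvp.update(pc, value)
print(f"{hits}/{len(trace)} loads predicted correctly")  # 2/5 in this toy trace
```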
Bio:
Martin Burtscher is an assistant professor in the School of Electrical and
Computer Engineering at Cornell University. His research focuses on architecture
and code optimization techniques to improve the data access efficiency of high-end
microprocessor systems. He received the combined BS/MS degree in computer science
from the Swiss Federal Institute of Technology (ETH) Zurich in 1996 and the
PhD degree in computer science from the University of Colorado at Boulder in
2000. He is a senior member of the ACM and the IEEE.
Date: Thursday, April 19th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Ethan L. Miller
University of California at Santa Cruz
Abstract:
Modern archival storage systems either store data in the clear,
ignoring security, or rely on keyed encryption to ensure privacy.
However, the use of encryption is a major concern when data must be
stored for an indefinite period of time: key management becomes
increasingly difficult as file lifetimes increase, and data loss becomes
increasingly likely because keys are a single point of failure and
losing a key is comparable to deleting the data. Moreover, traditional
systems are subject to the obsolescence of the encryption algorithms
themselves, which can expose petabytes of data the instant a
cryptographic algorithm is broken.
To address these concerns, we developed POTSHARDS, an archival storage system that addresses the long-term security needs of data with very long lifetimes without the use of encryption. POTSHARDS separates security and redundancy by utilizing two levels of secret splitting in a way that allows the original data to be reconstructed from the stored pieces. However, the data structures used in POTSHARDS are also designed in such a way that an unauthorized user attempting to collect sufficient shares to reconstruct any data will not go unnoticed. An evaluation of our POTSHARDS implementation shows that it stores and retrieves data at 2.5-5 MB/s, demonstrates its ability to recover user data given all of the pieces a user has stored across the archives, and shows its ability to recover from the loss of an entire archive.
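For readers unfamiliar with secret splitting, the sketch below shows the simplest XOR-based n-of-n variant. It is only an illustration of the primitive: the actual POTSHARDS construction layers two levels of splitting (including threshold schemes) on top of ideas like this, and the function names here are made up.

```python
# Minimal sketch of XOR-based n-of-n secret splitting, one common primitive
# behind encryption-free archival schemes like the one described above (the
# actual POTSHARDS construction is more elaborate; this only shows the idea).
import os

def xor_all(chunks: list[bytes], length: int) -> bytes:
    out = bytearray(length)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shares; all n are required to reconstruct."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(b ^ x for b, x in zip(secret, xor_all(shares, len(secret))))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original data."""
    return xor_all(shares, len(shares[0]))

data = b"archival record"
shares = split(data, 3)
assert combine(shares) == data          # all shares present: data recovered
assert combine(shares[:2]) != data      # a missing share reveals nothing useful
```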
Date: Tuesday, April 3rd, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Bert Tanner
Department of Mechanical Engineering
University of New Mexico
Abstract:
We present an approach to capturing concurrent cooperative behavior in multi-robot
systems based on formal methods. We show a new way of abstracting a hybrid
dynamical system describing the concurrent evolution of multiple robotic teams'
dynamics into a discrete model of computation, to be used for planning cooperative
behavior at a later stage. Closed loop switched robotic behaviors can be expressed
in terms
of a special context-free grammar. We define merge operators to express the composition
of several grammars in an effort to formally describe concurrent multi-robot
systems. The special, simple structure of these grammars allows us to exploit
known results in the field of basic process algebras, and show that language
equivalence for these types of systems is decidable up to bisimulation.
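For reference, the decidability claim above is stated up to bisimulation in the standard sense; the textbook definition of strong bisimulation on a labeled transition system is recalled below.

```latex
% Standard definition of strong bisimulation on a labeled transition system
% (S, \Lambda, \rightarrow); included only as background for the abstract.
\[
R \subseteq S \times S \text{ is a bisimulation iff for all } (p,q) \in R
\text{ and all labels } a \in \Lambda:
\]
\[
p \xrightarrow{a} p' \;\Rightarrow\; \exists\, q'.\; q \xrightarrow{a} q' \wedge (p',q') \in R,
\qquad
q \xrightarrow{a} q' \;\Rightarrow\; \exists\, p'.\; p \xrightarrow{a} p' \wedge (p',q') \in R.
\]
\[
p \sim q \iff \text{there exists a bisimulation } R \text{ with } (p,q) \in R.
\]
```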
Bio:
Dr. Tanner was born in Atlanta, GA, in 1973 and shortly thereafter moved to Athens,
Greece. In 1990 he entered the National Technical University of Athens (NTUA)
through examinations, ranking 8th nationally. He received his Eng. Diploma in
mechanical engineering (a 5-year program) from NTUA in 1996, graduating magna
cum laude (top 3%). In 2001 he received his Ph.D. degree in mechanical engineering from NTUA.
From 2001 to 2003 he was a post doctoral fellow with the Department of Electrical
and Systems Engineering at the University of Pennsylvania. In 2003, he joined
the faculty of the Department of Mechanical Engineering at the University of
New Mexico where he is currently an assistant professor. Since 2005, he holds
a secondary appointment with the Electrical and Computer Engineering Department
at UNM. He was a finalist for the 2003 EURON Georges Giralt PhD Award, and he
received the National Science Foundation CAREER Award in 2005. His research
interests include cooperative planning and control of interconnected multi-agent
systems, coordination of mobile sensor and actuation networks, nonholonomic
motion planning and control, hybrid modeling of embedded control systems,
and mobile manipulation of deformable material. Dr. Tanner is a member of
the Control Systems Society and the Robotics and Automation Society of the
IEEE. He is an Associate Editor for the IEEE Robotics and Automation Magazine
and serves on the Conference Editorial Boards of the IEEE Control Systems
and Robotics and Automation Societies.
Date: Thursday, March 29th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Andrew Childs
California Institute of Technology
Abstract:
Quantum mechanical computers would be much faster than ordinary classical computers at solving certain problems, such as factoring integers. However, the full extent of the computational power of quantum mechanics is not well understood. In this talk, I will describe recently developed quantum algorithms that outperform classical computation. These algorithms are based on efficient procedures for identifying quantum states. A simple example of a problem that can be solved in this way is the abelian hidden subgroup problem (HSP), the core problem solved by the factoring algorithm. I will explain how entangled measurements can be used to extend this approach to certain nonabelian HSPs. I will also describe how a similar approach can be applied to a new generalization of the abelian HSP, namely a problem of finding hidden nonlinear structures.
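For reference, the hidden subgroup problem mentioned above can be stated as follows (standard formulation, included only as background for the abstract).

```latex
% The hidden subgroup problem (HSP), as referenced in the abstract.
% Given a group G and a function f on G promised to satisfy
%   f(x) = f(y)  <=>  x and y lie in the same coset of some subgroup H of G,
% the task is to determine (a generating set for) H using queries to f.
\[
f : G \to S, \qquad f(x) = f(y) \iff xH = yH, \qquad \text{find } H \le G.
\]
% Factoring reduces to the abelian case: period finding is the HSP over
% G = \mathbb{Z} with hidden subgroup H = r\mathbb{Z}, where r is the period.
```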
Date: Friday, March 9th, 2007
Time: 3 pm — 4 pm
Place: ECE 118
Donna J. Cox
University of Illinois at Urbana-Champaign
Abstract:
Professor Donna J. Cox will present a visual feast of digital data-driven
scientific visualizations from her collaborative work at the National Center
for Supercomputing Applications, University of Illinois at Urbana-Champaign.
She coined the term "Renaissance
Teams" in 1985 to describe interdisciplinary research teams focused on
challenges in scientific visualization. In the early years, teams were
small local groups, but now through grid technologies, many teams are distributed
remotely and work collaboratively through global technologies. She collaborates,
designs and aesthetically directs data-driven visualizations of supercomputer
simulations from a variety of disciplines ranging from oceanography to
astrophysics. She both demonstrates rigorous design methodologies and works
with educators and focus groups to provide solutions to the challenging
work of data-driven visualization for non-expert audiences. She and her
team have developed new visualization methods and virtual reality tools
that provide novel ways for researchers to collaborate over remote distances
and high-speed networks. Cox will describe these technologies and the resulting
high-resolution, high-fidelity digital scientific animations for museums
and high-definition television.
Technology is exponentially transforming the way that people collaborate, scientists ‘see’, and designers invent. Cox will present a voyage of how art and science have converged through computer graphics technology. Her greatest passion is to employ new technologies to inspire general audiences with the awe and mystery of science. This presentation will demonstrate advanced graphical techniques in the making of ‘high art’ science for insight and public presentation.
Bio:
Donna J. Cox is Director of the Advanced Visualization Laboratory
at the National Center for Supercomputing Applications (NCSA), University
of Illinois at Urbana-Champaign (UIUC). She is Professor in the School
of Art and Design and a recognized pioneer in computer art and scientific
visualization. Cox’s collaborations in data-driven
visualizations are featured in a variety of large-format venues around the
world including digital planetaria. Her greatest passion is to bring science
to a wide range of audiences through innovative and aesthetic presentations.
She was Art Director and Producer of Scientific Visualization for the science
educational IMAX film "Cosmic Voyage," which was nominated for a 1997 Academy Award and
funded by the National Science Foundation, the Smithsonian Institution, and the Motorola
Foundation. She and her team have thrilled millions of people with visualizations
for such programs as the PBS NOVA “Hunt for the Supertwister” and “Runaway
Universe.” Cox
and NCSA worked with the American Museum of Natural History to produce high-resolution
visualizations for the Hayden Planetarium’s 2000 Millennium show, “Passport
to the Universe” and “The Search for Life: Are We Alone?” She
received the International Coler-Maxwell Award for her seminal paper, “Using
Supercomputers to Visualize Higher Dimensions: An Artist's Contribution to
Science,” in which she coined the term ‘Renaissance Teams.’ She was elected SIGGRAPH
Director at large for four years and SIGGRAPH 2005 Emerging Technologies
Chair and is currently on the Editorial Board of Leonardo: International
Journal of Art, Science and Technology. Cox has authored book chapters and
articles including the recent “Visualization and Visual Metaphors,” in Aesthetic
Computing, ed. Paul Fishwick, MIT
Press, 2006. She was recently honored at the Chicago Museum of Science
and Industry as one of 40 selected modern-day Leonardo da Vincis. She and
her team have recently finished a high-definition PBS NOVA show, “Monster
of the Milky Way,” that premiered Halloween 2006.
Note:
This colloquium/seminar
is sponsored by Electrical and Computer Engineering (ECE)
and cross-listed here for interested CS people.
Date: Thursday, March 8, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Mihai Christodorescu
University of Wisconsin
Abstract:
In recent years, viruses and worms have started to pose threats at
Internet scale in an intelligent, organized manner, enrolling millions of unsuspecting
and unprepared PC owners in spamming, denial-of-service, and phishing activities.
In January 2007, Vint Cerf stated that "of the 600 million computers
currently on the Internet, between 100 and 150 million were already part
of these botnets." A botnet is a network of malware-infected machines
that are under the control of one attacker. The fundamental cause of the
current situation lies in the limitations inherent in current detection technologies.
Commercial virus scanners have low resilience to new attacks because malware
writers continuously seek to evade detection through the use of obfuscation.
Any malware-detection technique that can counter these attacks must be able
to (1) identify malicious code under the cover of obfuscation and (2) provide
some guarantee for the detection of future malware. In my talk, I present
a new approach to the detection of malicious code that addresses these requirements
by taking into account the high-level program behavior without an increase
in false positives. The cornerstone of this approach is a formalism called
malspecs (i.e., specifications of malicious behavior) that incorporates instruction
semantics to gain resilience to common obfuscations. Experimental evaluation
demonstrates that our behavior-based malware-detection algorithm can detect
variants of malware due to their shared malicious behaviors, while maintaining
a relatively low run-time overhead (a requirement for real-time protection).
Additionally, the malspec formalism enables reasoning about the resilience
of a detector. In this context, I present a strategy for proving the soundness
and completeness of detection algorithms.
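As a rough flavor of what a behavioral specification looks like, the toy matcher below checks a (hypothetical) execution trace for an ordered pair of system calls linked by a data-flow constraint. It is not the malspec formalism from the talk, which is considerably richer and incorporates instruction semantics; the trace format and function names here are invented for illustration.

```python
# Toy behavioral matcher: does an execution trace contain an ordered pair of
# system calls matching a specification, with the constraint that the handle
# written to is the one returned by the earlier open? This only gives the
# flavor of specifying behavior rather than byte signatures; the malspec
# formalism in the talk is richer (it incorporates instruction semantics).

def matches_self_copy(trace):
    """Detect the behavior: open a file, then write the program's own image to it."""
    open_handles = set()
    for call, args, ret in trace:
        if call == "NtCreateFile":
            open_handles.add(ret)                  # remember the returned handle
        elif call == "NtWriteFile":
            handle, source = args
            if handle in open_handles and source == "self_image":
                return True                        # data flow: same handle, own image
    return False

benign  = [("NtCreateFile", ("log.txt",), 7), ("NtWriteFile", (7, "log_data"), 0)]
wormish = [("NtCreateFile", ("a.exe",), 9), ("NtWriteFile", (9, "self_image"), 0)]
print(matches_self_copy(benign), matches_self_copy(wormish))  # False True
```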
Bio:
Mihai Christodorescu holds a Bachelor's degree in Computer Science
from the University of California at Santa Barbara and a Master's degree in Computer
Sciences from the University of Wisconsin-Madison, where he is currently a doctoral
candidate. His research is in computer security with a current focus on the
detection of malicious software. He is also interested in and has worked
on problems in software engineering, program analysis, and formal methods,
as well as their applications to security.
Date: Tuesday, March 6, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Michela Taufer
University of Texas at El Paso
Abstract:
The increasing use of multi-scale modeling in scientific applications
can take advantage of highly distributed volunteer computing and new paradigms
for using these computational resources. Such an approach is in need of runtime
adaptations that are driven by specific characteristics of the scientific
application and available computational resources. The resulting adaptive
algorithms must be integrated in cyber-infrastructure environments to assist
scientists in their day-to-day research.
The DAPLDS (Dynamically Adaptive Protein-Ligand Docking System) project uses volunteer computing resources across the Internet, through the Docking@Home website, to study putative drugs for diseases such as HIV/AIDS and cancer. This specific scientific application performs all-atom simulations of drug-like small molecules (ligands) docking to the 3D structure of a target protein. This computational approach is used by scientists for structure-based drug design. DAPLDS serves as a case study to determine if adaptive selection of computational models can positively affect the final accuracy of simulation predictions. DAPLDS also aims to determine if simultaneous adaptive selection of computational resources can ultimately improve simulation throughput and accuracy.
It is our goal to demonstrate that this general paradigm proposed in the DAPLDS project can scale beyond single applications and systems to many other scientific projects that employ grid computing and high-performance computing.
Bio:
Michela Taufer is an Assistant Professor in the Department of Computer
Science at the University of Texas at El Paso. Her research interests include
new algorithms and architectures for resource- and time-expensive applications
in computational chemistry, physics, and biology as well as effective migration
of large-scale simulations to volunteer computing projects based on public
resources. Dr. Taufer earned her MS in Computer Engineering from the University
of Padua and her Ph.D. in Computer Science from the Swiss Federal Institute
of Technology Zurich.
Date: Thursday, March 1, 2007
Time: 11 am — 12:15 pm
Place: SUB, Lobo A/B room
Thomas Hayes
University of Chicago
Abstract:
Markov chains are widely used to efficiently sample from complex distributions over sets of exponential size.
They are at the heart of some of our best algorithms, e.g. for estimating the volume of high-dimensional convex
bodies, estimating the permanent, and sampling configurations of the Ising model in statistical physics.
An unusual feature of these algorithms is that their practical implementation relies crucially on a good
theoretical understanding of the convergence rate of the underlying Markov chain: running the chain for too
many steps wastes resources, but not running long enough produces the wrong output.
I will discuss some recent advances in the analysis of Markov chain Monte Carlo (MCMC) algorithms, using the following problem as a touchstone example.
A (proper) coloring of a graph is an assignment of a color to each vertex so that no two adjacent vertices receive the same color. Assuming the number of colors is at least D+1, where D is the maximum degree of the graph, a simple greedy algorithm produces such a coloring in linear time. But suppose we want to efficiently produce not just any coloring, but rather a typical (i.e., uniformly random) coloring. How many colors do we need now?
Here is a very simple MCMC algorithm: start with any coloring, for instance one obtained by the greedy algorithm. In each step, recolor a randomly chosen vertex with a randomly chosen color, subject to the constraint that adjacent vertices cannot be assigned the same color. It is not hard to see that, given enough colors, this algorithm generates a nearly-uniform random coloring in nearly-linear time. The problem is: how many colors are "enough"?
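The chain just described (often called Glauber dynamics) is short enough to state in code; a minimal sketch with a greedy starting coloring follows. It is illustrative only and makes no claim about how many colors suffice for fast mixing, which is exactly the question the talk addresses.

```python
# Minimal sketch of the MCMC algorithm described above (Glauber dynamics for
# graph coloring): start from a greedy coloring, then repeatedly recolor a
# random vertex with a random color not used by any of its neighbors.
import random

def greedy_coloring(adj, k):
    """Proper coloring with k >= max_degree + 1 colors, one vertex at a time."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(k) if c not in used)
    return color

def glauber_step(adj, color, k):
    v = random.choice(list(adj))
    allowed = [c for c in range(k) if c not in {color[u] for u in adj[v]}]
    color[v] = random.choice(allowed)   # nonempty whenever k >= deg(v) + 1

def sample_coloring(adj, k, steps=10_000):
    color = greedy_coloring(adj, k)
    for _ in range(steps):
        glauber_step(adj, color, k)
    return color

# Example: a 4-cycle (max degree 2), sampled with k = 4 colors.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(sample_coloring(cycle, k=4))
```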
I will give an overview of recent advances on this problem, each based on extensions of the classical "coupling" method for analyzing Markov chain convergence.
Date: Tuesday, February 27, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Yu (David) Liu
Johns Hopkins University
Abstract:
You may have the skills to build a garage, but will the same skills suffice for building a skyscraper? Research on software construction is not new, but scale changes everything. Next-generation software systems can easily grow into billions of lines of code, with complex and decentralized structures. Hundreds of inter-dependent platforms, sensors, and devices are commonly involved. Continuous evolution after initial deployment is often expected. This talk will illustrate why software foundations today -- programming models in particular -- are lagging behind the needs of these Ultra-Large-Scale (ULS) systems, and how we can catch up by shifting the design focus to modeling the interactions of the different building blocks underlying modern software.
In this talk, I will describe how the interaction-oriented philosophy can reshape the landscape of two fundamental programming models: components and objects. The new component model, called Assemblages, provides novel mechanisms to bridge time-honored modularity research with modern ULS systems. It unifies static linking, dynamic linking, and distributed communication in one compact model, and can potentially impact distributed programming and deployment for next-generation systems such as sensor networks. A structurally similar but semantically different design can also be applied to object-oriented programming. The resulting new language, named Classages, captures the complex object interactions in a well-structured fashion, and facilitates round-trip software engineering. Both Assemblages and Classages are equipped with provably sound typechecking mechanisms to capture a wide range of bugs. Both compilers have been implemented.
Bio:
David is a Ph.D. candidate at Johns Hopkins University. His passion lies in designing novel techniques for developing better software. His research interests span most areas related to this central theme, e.g., programming languages, software engineering, security, and software support for emerging domains such as sensor networks.
Date: Thursday, February 22, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Lorenzo Torresani
Stanford University
Abstract:
In this talk I will describe methods for the acquisition of computational models from visual data. In computer graphics, sophisticated models are necessary for simulation of real phenomena. In computer vision, visual models are used as a form of prior knowledge to disambiguate the otherwise ill-posed problem of image understanding. Traditionally, these models are manually constructed. My work proposes the application of machine learning algorithms that can automatically extract highly detailed models from visual observations.
I will begin with an algorithm for recovering non-rigid 3D models from image streams, without the use of training data or any prior knowledge about the modes of deformation of the object. I will describe how this method can be used to reconstruct subtle human body deformations in 3D space from single-view video under severe cases of occlusion and variable illumination.
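One common way to formalize this kind of non-rigid reconstruction is the linear shape-basis model sketched below. It is included only as background; the talk's method may differ in its priors and estimation details.

```latex
% Linear shape-basis model commonly used for non-rigid structure from motion
% (shown for reference; the talk's method may differ in its details).
% The 3D shape at frame t is a linear combination of K basis shapes B_k,
% and the observed 2D points are its camera projection plus noise:
\[
S_t \;=\; \sum_{k=1}^{K} c_{t,k}\, B_k \in \mathbb{R}^{3 \times P},
\qquad
W_t \;=\; R_t\, S_t + T_t + N_t ,
\]
% where R_t is a 2x3 orthographic camera rotation, T_t a translation, and the
% unknowns (c_{t,k}, B_k, R_t, T_t) are estimated from the tracked points W_t.
```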
I will then present a technique for learning classification features from high-dimensional data, such as images consisting of thousands of pixels. When applied to a set of face images with identity labels, this algorithm automatically extracts the features that are most salient for the purpose of identification and discards visual effects unimportant for recognition, such as non-uniform illumination and facial expressions.
I will conclude by describing a system for motion style editing based on a style model learned from human perceptual observations and motion capture data. This system enables users to create novel versions of pre-recorded motion sequences in desired styles.
Bio:
Lorenzo Torresani is a Senior Researcher at Riya, Inc. He received a Laurea Degree in Computer Science with summa cum laude honors from the University of Milan (Italy) in 1996, and an M.S. and a Ph.D. in Computer Science from Stanford University in 2001 and 2005, respectively. His previous professional experience includes working as a researcher at the Courant Institute of New York University, and at Digital Persona, Inc. His research interests are in computer vision, computer animation, and machine learning. He received the Best Student Paper Award at the IEEE Conference On Computer Vision and Pattern Recognition, 2001, for his work on visual tracking and 3D reconstruction of non-rigid objects.
Date: Tuesday, February 20, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Anna Cinzia Squicciarini
Purdue University
Abstract:
Trust negotiation is a key technology for establishing trust in open systems,
in which sensitive interactions may occur between entities with no prior
knowledge of each other. Although several schemes for the management of
trust negotiation have been proposed, none of them provides a comprehensive
approach to the problem of on-line mutual authorization with privacy guarantees.
In this talk, I will present approaches to the problem of privacy in trust
negotiation systems.
Specifically, the focus of this talk will be on Trust-X, an XML-based system for trust negotiation. I will discuss the main features of Trust-X, from the policy specification language, to the strategies for efficiently carrying out negotiations, to the approaches adopted for addressing privacy issues. In discussing the trust negotiation strategies, I will illustrate how they can be used in different application domains, on the basis of negotiators’ security requirements. I will also present techniques for supporting anonymity in the context of a privacy-preserving trust negotiation; such techniques can be used to carry out trust negotiations without revealing identity-related information.
An overview of the architecture of the Trust-X prototype system, which has been implemented using a web-service architecture, will also be given. I will show the main evaluation results of the system's performance and evaluate the complexity of the defined strategies. I will conclude by discussing possible extensions and outlining the next research challenges that need to be addressed for the deployment and applicability of such a promising technology.
Bio:
Anna Cinzia Squicciarini is a postdoctoral associate in the Computer
Science Department at Purdue University. She also collaborates as a security
researcher with the ITaP group at Purdue University on the NSF TeraGrid
project. She received her PhD in Computer Science from the University of
Milan, Italy, in March 2006. During fall 2003 she was a visiting researcher
at the Swedish Institute of Computer Science, Stockholm, and during spring 2004
she was a research scholar at Colorado State University, Fort Collins (CO),
US.
She also spent several months at Purdue University as a visiting student before her graduation.
Her main research interests include trust negotiations, privacy, and access control for grid computing systems. She has been the main architect of Trust-X, an advanced trust negotiation system supporting privacy policies, anonymity, and recovery techniques. Currently, she is exploring issues related to identity management systems and the integration of biometric techniques with existing authentication mechanisms.
Date: Thursday, February 15, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Andrea Thomaz
MIT
Abstract:
This talk introduces a paradigm, Socially Guided Machine Learning, that takes a human-machine interaction perspective on Machine Learning. This approach asks: how can systems be designed to take better advantage of learning from a human partner and of the ways that everyday people approach the task of teaching? In this talk I describe two novel social learning systems, on robotic and computer game platforms. Results from these systems show that designing agents to better fit human expectations of a social learning partner both improves the interaction for the human and significantly improves the way machines learn.
Sophie is a virtual robot that learns from human players in a video game via interactive Reinforcement Learning. A series of experiments with this platform uncovered and explored three principles of Social Machine Learning: guidance, transparency, and asymmetry. For example, everyday people were able to use an attention direction signal to significantly improve learning on many dimensions: a 50% decrease in actions needed to learn a task, and a 40% decrease in task failures during training.
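For a concrete picture of interactive reinforcement learning with guidance, the sketch below shows one hypothetical way such a loop is commonly wired: tabular Q-learning plus a human-supplied reward and an attention hint that narrows the actions considered. It is not the actual Sophie implementation, and all names and numbers are made up.

```python
# Minimal sketch of interactive RL with human guidance, in the spirit of the
# systems described above (hypothetical setup, not the actual Sophie code):
# the learner does tabular Q-learning, the human may add a reward signal, and
# an optional "guidance" hint restricts which actions the learner considers.
import random
from collections import defaultdict

class SociallyGuidedQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, guidance=None):
        """Pick an action; a human guidance hint narrows the candidate set."""
        candidates = guidance if guidance else self.actions
        if random.random() < self.epsilon:
            return random.choice(candidates)
        return max(candidates, key=lambda a: self.q[(state, a)])

    def update(self, state, action, env_reward, human_reward, next_state):
        """Standard Q-learning update on the combined reward signal."""
        reward = env_reward + human_reward
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Usage sketch: the human points at the oven, so only oven-related actions are tried.
learner = SociallyGuidedQLearner(actions=["go-oven", "go-table", "wait"])
a = learner.choose("kitchen", guidance=["go-oven"])
learner.update("kitchen", a, env_reward=0.0, human_reward=1.0, next_state="at-oven")
```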
On the Leonardo social robot, I describe my work enabling Leo to participate in social learning interactions with a human partner. Examples include learning new tasks in a tutelage paradigm, learning via guided exploration, and learning object appraisals through social referencing. An experiment with human subjects shows that Leo's social mechanisms significantly reduced teaching time by aiding in error detection and correction.
Bio:
Andrea Thomaz is a Post-Doctoral Associate in the Media Laboratory at the Massachusetts Institute of Technology, where she also received her Ph.D. in 2006. Previously, Andrea obtained a B.S. in Electrical and Computer Engineering from the University of Texas at Austin in 1999. Her research interests span Machine Learning, Robotics, Human-Robot Interaction, and Cognitive Science, with the goal of developing machines that learn new tasks and goals from ordinary people in everyday human environments.
Date: Thursday, February 8th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Dahai Xu
Electrical Engineering Department
Princeton University
Abstract:
In this talk, I will focus on intra-domain traffic engineering, where
network operators control the flow of traffic in their networks by tuning
the configuration of link-state routing protocols. We settle a long-standing
question with a positive answer: optimal traffic engineering can be realized
using just link-state routing protocols. In conventional link-state protocols
like OSPF and IS-IS, routers compute shortest paths from link weights, splitting
traffic evenly when multiple shortest paths exist. Unfortunately, for these
protocols, computing the optimal link weights for a given traffic matrix
is an NP-hard problem. Even the best setting of the weights can lead to traffic
loads that deviate significantly from the ideal distribution of the traffic.
In this work, we show that splitting traffic over multiple paths as an exponential
function of path length can achieve optimal traffic engineering. We also
present a polynomial-time algorithm for computing the optimal link weights,
which in practice is also much more efficient than the local-search heuristics
used today. In addition, we show how to apply the algorithm to robust traffic
engineering, where one set of link weights must perform well for multiple
traffic matrices or failure scenarios. These results are based on our new
Network Entropy Maximization framework for routing-protocol design, which
parallels the role of Network Utility Maximization for congestion control.
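One way to make the exponential-splitting statement concrete is the rule below, in which traffic toward a destination is divided among paths in proportion to an exponential penalty on how much each path exceeds the shortest one. It is shown only as an illustration of the idea; the talk's exact formulation may differ.

```latex
% Illustration of exponential path-length splitting (the talk's exact
% formulation may differ). With link weights w, a path P from source s to
% destination d has length L(P) = sum of w over its links, and the fraction
% of s->d traffic placed on P is
\[
\phi(P) \;=\;
\frac{e^{-\bigl(L(P) - L^{*}_{s,d}\bigr)}}
     {\displaystyle\sum_{P' \in \mathcal{P}_{s,d}} e^{-\bigl(L(P') - L^{*}_{s,d}\bigr)}},
\qquad
L^{*}_{s,d} = \min_{P' \in \mathcal{P}_{s,d}} L(P'),
\]
% so equal-length shortest paths split evenly and longer paths carry
% exponentially less traffic.
```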
Bio:
Dr. Dahai Xu is currently a Postdoctoral Research Associate in the Department
of Electrical Engineering at Princeton University. He received his Ph.D.
degree in Computer Science and Engineering from University at Buffalo in
2005. His research interests include survivability and restoration in IP/MPLS,
optical networks, network design and protocol development for next generation
Internet and performance evaluation (modeling, simulation and measurements).
More information can be found at http://www.princeton.edu/~dahaixu/.
Date: Tuesday, February 6th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Jason Hartline
Microsoft Research
Abstract:
The Internet has emerged as the single most important platform for
resource sharing among parties with diverse and selfish interests, and it is
the most interesting economic system of our time. As rapidly as new Internet
systems are developed and deployed, Internet users find ways to exploit them
for their own benefit in ways not intended by the designers. Prevalent undesirable
examples of this phenomenon include spam in email systems, bid-sniping in
eBay's auction marketplace, free-loading in file-sharing networks, and click-fraud
in Internet advertising. The existence of these economic bugs brings to the fore
the urgent problem of developing a principled approach to designing systems
where exploits are not economically viable. This is the general question
addressed by the research area of algorithmic mechanism design.
Techniques from algorithms are very well suited for addressing both new and traditional economic questions. I will discuss the impact of four algorithmic techniques: computation, approximation, competitive analysis, and reduction. I will focus on their application to the economic objective of profit maximization.
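As background, one classic example of competitive analysis applied to profit maximization (not necessarily the example used in the talk) is the random-sampling auction for digital goods, sketched below: bidders are split randomly into two groups, the best fixed price is computed on each half, and that price is offered to the other half. The bid values in the example are invented.

```python
# Classic example of competitive analysis for profit maximization (offered as
# background; the talk may use different examples): the random sampling
# auction for digital goods. Bidders are split into two groups, the
# revenue-optimal single price is computed on each half, and that price is
# offered to the bidders in the other half. This is truthful because no
# bidder's own bid affects the price they are offered.
import random

def optimal_single_price(bids):
    """Price p (taken from the bids) maximizing p * |{bids >= p}|."""
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p), default=0)

def random_sampling_auction(bids, rng=random):
    group_a, group_b = [], []
    for b in bids:
        (group_a if rng.random() < 0.5 else group_b).append(b)
    price_for_b = optimal_single_price(group_a)   # computed without B's bids
    price_for_a = optimal_single_price(group_b)   # computed without A's bids
    revenue  = sum(price_for_a for b in group_a if b >= price_for_a)
    revenue += sum(price_for_b for b in group_b if b >= price_for_b)
    return revenue

bids = [1, 1, 2, 3, 5, 8, 13, 21]
print(random_sampling_auction(bids, random.Random(0)))
```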
Bio:
Jason has been a researcher at Microsoft Research, Silicon Valley
since 2004. Prior to that he was a postdoctoral research fellow at the Aladdin
Center at Carnegie Mellon University. He received his Ph.D. in Computer Science
from the University of Washington in 2003 under advisor Anna Karlin, and B.S. degrees
in Computer Science and Electrical Engineering from Cornell University in
1997. He has been working on topics in algorithmic mechanism design since
1999.
Date: Thursday, February 1, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Jed Crandall
University of California, Davis
Abstract:
My research philosophy is to approach security not as a problem to
be solved, but as a battle for defenders (such as antivirus professionals,
law enforcement, and next-generation security technology developers) to wage;
so my goal is to provide them with the tools they need, both as implementations
of actual techniques they can use, and as theory that is firmly grounded in
practice and can be applied to the situations that they face. This talk will
cover two projects I have worked on: DACODA (DAvis malCODe Analyzer) and Temporal
Search.
The threat of malware, such as worms and botnets, to the Internet infrastructure and other parts of the information economy is constantly growing and evolving. Where simple worms had once wreaked senseless havoc and vandalized hundreds of thousands of systems, now large botnets carry out the instructions of organized criminal enterprises - not because the former problem is solved, but because the threat has developed. One promising line of defense is network signatures that detect the exploits that worms and botnets use to spread. While malware writers could use polymorphism and metamorphism to change the network signature of their malware, they have not done so except in a very limited fashion, probably because defenses are not mature enough to warrant the effort. Given a lack of significant polymorphic and metamorphic worms and botnets in the wild, how can we assess the ability of defenses to protect against polymorphism and metamorphism before those defenses are deployed?
DACODA is a full-system implementation of symbolic execution for analyzing worm exploits. As a worm exploits a vulnerability on a victim host, such as a buffer overflow, there are particular bytes of the network traffic that cannot be changed without causing the attack to fail, for example "GET HTTP" cannot be removed from the Code Red worm exploit or the attack will not work. We used DACODA as a tool to quantify and study the limits of polymorphism and metamorphism and develop a theory to understand this threat to signature-based worm defenses. This theory is based on the intricacies of 14 real exploits that we analyzed, seven of them actual attacks or worms on our Minos/DACODA Internet honeypots.
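The observation about unchangeable bytes can be illustrated far more crudely than DACODA does it: the toy below simply diffs several captured instances of the same hypothetical exploit and reports the byte positions that never varied, which is the kind of invariant content a signature could key on. DACODA instead derives such bytes by symbolic execution of the victim system; the sample payloads here are invented.

```python
# Toy illustration only: DACODA derives must-be-present bytes via full-system
# symbolic execution, while this sketch merely diffs several captured
# instances of the same (hypothetical) exploit and reports byte positions
# that never varied -- a crude approximation of the "invariant bytes" a
# signature-based defense could key on.

def invariant_bytes(samples: list[bytes]) -> dict[int, int]:
    """Byte positions (and values) identical across every captured sample."""
    reference = samples[0]
    common_len = min(len(s) for s in samples)
    return {
        i: reference[i]
        for i in range(common_len)
        if all(s[i] == reference[i] for s in samples)
    }

captures = [
    b"GET /default.ida?NNNNNNNN HTTP/1.0\r\n",
    b"GET /default.ida?XXXXXXXX HTTP/1.0\r\n",
    b"GET /default.ida?QQQQZZZZ HTTP/1.0\r\n",
]
inv = invariant_bytes(captures)
print(bytes(v for _, v in sorted(inv.items())))  # the stable protocol framing
```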
We have also looked at the problem of responding to malware that has already spread out enough to cause a threat. Temporal search is a behavior-based analysis technique using virtual machines where it is possible to discover that a piece of malware is counting down to some event in the future (when it might, for example, delete all of your files or download new instructions from a public web server) without waiting for the event to occur. It is based on slight time perturbations, symbolic execution, predicate inversion, and then a weakest precondition analysis to account for quirks in the Gregorian calendar (leap years, number of days in each month, etc.). Applying a prototype of this technique to six real worms taught us a lot about timebomb attacks and behavior-based malware analysis in general.
Date: Tuesday, January 30th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Joe Kniss
University of New Mexico
Abstract:
Scientific visualization is a discipline that joins data analysis
and human visual perception. Visualization addresses the need for high-performance,
interactive exploration of data of ever-increasing size and complexity. My recent
work focuses on volume visualization techniques. Volume visualization deals with
the discovery and display of important features embedded in three-dimensional
data. My research aims to enhance interactive direct volume rendering visualization
applications by focusing on the design of transfer functions for multi-variate
data and shading models for visualizing this kind of data. I will discuss three
main contributions: analytic transfer functions for interactive classification
of arbitrary multi-field datasets, data-space reparameterization for encoding
pre-classified data, and interactive volumetric light transport for rendering
multi-field data. We use these techniques to show that high quality multi-dimensional
transfer functions, classification, and light transport algorithms are beneficial
for interactive three-dimensional multi-field volume rendering in medical and
scientific applications.
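To make the transfer-function terminology concrete, the sketch below shows a simple 1D transfer function and front-to-back compositing along a single ray; the talk concerns the much richer multi-dimensional, multi-field case, and the particular mapping and sample values here are made up.

```python
# Minimal sketch of a 1D transfer function and front-to-back compositing along
# one ray, to make the terms above concrete (the talk concerns richer
# multi-dimensional, multi-field transfer functions and light transport).

def transfer_function(value):
    """Map a scalar sample to (r, g, b, alpha); here: highlight values near 0.7."""
    alpha = max(0.0, 1.0 - abs(value - 0.7) / 0.1)   # narrow opacity peak at 0.7
    return (1.0, 0.6, 0.2, alpha * 0.3)

def composite_ray(samples):
    """Front-to-back alpha compositing of the samples along one ray."""
    color = [0.0, 0.0, 0.0]
    alpha_acc = 0.0
    for value in samples:
        r, g, b, a = transfer_function(value)
        weight = (1.0 - alpha_acc) * a
        color[0] += weight * r
        color[1] += weight * g
        color[2] += weight * b
        alpha_acc += weight
        if alpha_acc > 0.99:          # early ray termination
            break
    return color, alpha_acc

ray_samples = [0.1, 0.3, 0.68, 0.71, 0.72, 0.9]
print(composite_ray(ray_samples))
```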
Bio:
Joe is currently a Visiting Assistant Professor at the University of
New Mexico. He received a PhD from the University of Utah in 2006. During
this time, he was a DOE High Performance Computer Science Graduate Fellow.
He received a Master of Science from the University of Utah in 2002 and
a Bachelor of Science from Idaho State University in 1999. He recently co-authored
a book, "Real-Time Volume Graphics", which covers foundational and
advanced methods for volume rendering. His research interests include computer
graphics, visualization, human-computer interaction, pattern recognition,
image processing, and virtual environments.
Date: Tuesday, January 23rd, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Daniel A. Reed
Renaissance Computing Institute
University of North Carolina at Chapel Hill
Abstract:
Legend says that Archimedes remarked, on the discovery of the lever,
"Give me a place to stand and I can move the world." Today, computing
pervades all aspects of society. "Science" and "computational science" have
become largely synonymous, and computing is the intellectual lever that opens
the pathway to discovery in diverse domains. As new discoveries increasingly
lie at the interstices of traditional disciplines, computing is also the
enabler for scholarship in the arts, humanities, creative practice and public policy.
This talk surveys some of these challenges and opportunities. On the policy front, this includes the new Computing Community Consortium (CCC) and the upcoming PCAST report on the NITRD program. On the technical front, it includes possible approaches to resilient Grid and HPC system design, ranging from intelligent hardware monitoring and adaptation, through low-overhead recovery schemes, statistical sampling, and differential scheduling, to alternative models of system software.
Date: Thursday, January 18th, 2007
Time: 11 am — 12:15 pm
Place: ECE 118
Ethan L. Miller
University of California, Santa Cruz
Abstract:
The data storage needs of large high-performance
and general-purpose computing environments are generally best served by distributed
storage systems because of their ability to scale and withstand individual
component failures. Object-based storage promises to address these needs
through a simple networked data storage unit, the Object Storage Device (OSD), which manages all local storage issues and exports a
simple read/write data interface. Despite this simple concept, many
challenges remain, including efficient object storage, centralized
metadata management, data and metadata replication, data and metadata reliability, and security.
This talk will open by detailing the object-based storage paradigm and showing how it improves on existing storage paradigms. I will then describe the Ceph file system developed at the UC Santa Cruz Storage Systems Research Center (SSRC). Ceph is designed to provide data at an aggregate of hundreds of gigabytes per second to tens of thousands of clients while handling individual component failures, and can also handle 250,000 metadata requests per second, removing bottlenecks in opening files for high-bandwidth use. This talk will focus on two aspects of Ceph: the decentralized algorithm used to distribute data to thousands of storage devices, and the scalable mechanisms that Ceph uses to ensure that access to the file system is secure.
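As background on decentralized placement, the sketch below illustrates the general idea that any client can compute where an object lives without consulting a central table. It uses rendezvous (highest-random-weight) hashing rather than Ceph's actual CRUSH algorithm, which additionally accounts for failure domains and device weights; the object and device names are invented.

```python
# Illustration of decentralized, computation-based data placement: any client
# can compute, without a central lookup table, which storage devices hold a
# given object. This sketch uses rendezvous (highest-random-weight) hashing;
# Ceph's actual algorithm is CRUSH, which additionally understands failure
# domains and device weights -- this only shows the general idea.
import hashlib

def placement(object_id: str, devices: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` devices for an object by ranking a per-pair hash."""
    def score(device: str) -> int:
        digest = hashlib.sha256(f"{object_id}:{device}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return sorted(devices, key=score, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(12)]
print(placement("inode42.chunk7", osds))
# Every client computes the same answer, and removing one device only
# remaps the objects that were stored on it.
```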
Bio:
Ethan Miller is an associate professor of computer science at
the University of California, Santa Cruz, where he is a member of the
Storage Systems Research Center (SSRC). He received his ScB from Brown
in 1987 and his PhD from UC Berkeley in 1995, where he was a
member of the RAID project. He has written over 75 papers in areas including large-scale storage systems, file systems for
next-generation storage technologies, secure file systems, distributed systems, and information retrieval. His current research interests
include issues in petabyte-scale storage systems, file systems for non-volatile RAM technologies, and long-term archival storage systems.
His broader interests include file systems, operating systems, parallel
and distributed systems, and computer security. He can be contacted at