UNM Computer Science

Colloquia



A security research plan for vehicle systems

Date: Friday, December 5th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Hisashi Oguma
Toyota InfoTechnology Center, Co., Ltd.

Abstract:
I will describe a security research project at Toyota InfoTechnology Center. The proportion of electronics in vehicle equipment is steadily increasing, and future vehicles will also connect to public networks to provide many kinds of services. They are therefore expected to face a wide variety of threats, and the electronic control units (ECUs) embedded in them may execute malicious programs as a result of tampering. The requirements of vehicle systems differ from those of conventional embedded systems such as cellular phones or home appliances, so it is hard to introduce existing technologies into vehicle systems without modification. In this talk I will present these requirements and the future plan of our research.

Bio:
Hisashi Oguma is currently a researcher at Toyota InfoTechnology Center, Co., Ltd. He received a Ph.D. in computer science from the University of Electro-Communications, Japan in 2002 and worked at NTT DoCoMo from 2002 through 2007. He was a visiting researcher at DoCoMo Communication Laboratories USA, Inc. from October 2004 through March 2005. His research focuses on parallel and distributed computing, mobile computing, ubiquitous computing, and security for embedded systems.

Anomalous Change Detection in Remote Sensing Imagery

Date: Friday, November 21st, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

James Theiler
Space and Remote Sensing Sciences Group
Los Alamos National Laboratory

Abstract:
The detection of actual changes in a pair of images is confounded by the inadvertent but pervasive differences that inevitably arise whenever two pictures are taken of the same scene, but at different times and under different conditions. These differences include effects due to illumination, calibration, misregistration, etc. If the actual changes of interest are assumed to be rare, then one can "learn" what the pervasive differences are, and can identify the deviations from this pattern as the anomalous changes.

While the straight anomaly detection problem is often expressed as a "one-class" problem, I will argue that it is really a two-class problem where the second (aka background or reference) class has implicit properties that are not always acknowledged. By adapting this background class, one can re-orient the machine learning methodology for anomaly detection to work for anomalous change detection.

I will describe some of these theoretical issues as well as more practical problems that arise in the anomalous change detection problem, and show some recent results from an ongoing project at Los Alamos.
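
As a concrete (and heavily simplified) illustration of this framing, the Python/NumPy sketch below stacks the coregistered pixel values of the two images, fits a single Gaussian to the joint distribution so that the pervasive differences are absorbed into the learned mean and covariance, and scores each pixel pair by its deviation from that model. This is a hypothetical baseline in the spirit of the abstract, not the specific method presented in the talk.

    import numpy as np

    def anomalous_change_scores(x, y):
        # x: (H, W, Bx) and y: (H, W, By) coregistered images.
        # Fit one Gaussian to the stacked pixel pairs; pervasive
        # differences are absorbed into the covariance, so a high
        # Mahalanobis distance marks a change that breaks the pattern.
        h, w = x.shape[:2]
        z = np.concatenate([x.reshape(h * w, -1), y.reshape(h * w, -1)], axis=1)
        zc = z - z.mean(axis=0)
        cov = np.cov(z, rowvar=False)
        scores = np.einsum('ij,jk,ik->i', zc, np.linalg.pinv(cov), zc)
        return scores.reshape(h, w)

    # Flag, say, the top 0.1% of pixel pairs as anomalous changes:
    # mask = scores > np.quantile(scores, 0.999)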

Dependable Distributed Computing at AT&T Labs Research

Date: Thursday, November 13th, 2008
Time: 3:30 pm — 5:00 pm
Place: DSH 233

Mary Fernandez
AT&T Labs Research

Abstract:
In the Dependable Distributed Computing Research group at AT&T Labs Research, our research focuses on understanding and improving the performance, reliability, security, and management of networks and distributed systems. Despite this broad charter, our many projects share a common philosophy of using formal methods to model and analyze networks and distributed systems and of using declarative languages to specify and implement them. This philosophy yields practical benefits: high-level programming abstractions are semantically transparent, permit static analysis, and result in more secure and reliable systems.

In this talk, I will give an overview of the research in our department, including our work in VoIP and secure overlay networks. I will then focus on Yakker, a tool for generating application-specific firewalls. Yakker firewalls examine the network protocol messages that are the inputs to network servers and filter out malformed messages, as well as other messages that can trigger, for example, buffer overflows. The novelty of Yakker is that it is highly automated, producing a simple firewall directly from existing human-readable specifications, namely, the “Request for Comments” documents that define most of the basic protocols of the Internet.
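
To illustrate the general idea of specification-driven message filtering (a toy Python sketch, not Yakker's actual mechanism, which derives its grammars from RFC documents), consider a filter that admits only request lines conforming to a bounded grammar and rejects everything else, including oversized fields of the kind that trigger buffer overflows:

    import re

    # Toy grammar for an HTTP/1.x request line, with an explicit length
    # bound so that oversized tokens are rejected instead of being
    # passed through to a potentially vulnerable server.
    REQUEST_LINE = re.compile(
        r'^(GET|HEAD|POST|PUT|DELETE|OPTIONS) '   # method
        r'(/[\x21-\x7e]{0,2048}) '                # request target, bounded
        r'HTTP/1\.[01]\r\n$'                      # protocol version
    )

    def admit(message: str) -> bool:
        # True only for well-formed, size-bounded request lines.
        return REQUEST_LINE.match(message) is not None

    assert admit("GET /index.html HTTP/1.1\r\n")
    assert not admit("GET /" + "A" * 10000 + " HTTP/1.1\r\n")  # oversized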

Bio:
Mary Fernandez is Executive Director of Dependable Distributed Computing Research at AT&T Labs Research. Her own research sits at the juncture of database systems and programming languages and focuses on domain-specific languages for data management in centralized and distributed environments. She has published more than 30 articles in leading conferences and journals. In addition, she is co-editor of several World Wide Web Consortium (W3C) recommendations, is Secretary of ACM SIGMOD, is an advisory council member of MentorNet (www.mentornet.net), and is a former associate editor of ACM Transactions on Database Systems.

Energy-aware Server Provisioning and Load Dispatching for Connection Intensive Internet Services

Date: Friday, November 7th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Wenbo He
Department of Computer Science
University of New Mexico

Abstract:
During the past 15 years, we have witnessed tremendous growth in Internet applications. Energy consumption for hosting Internet services is becoming a pressing issue as these services scale up. Dynamic server provisioning techniques are effective in turning off unnecessary servers to save energy. Such techniques, mostly studied for request-response services, face unique challenges in the context of connection servers that host a large number of long-lived TCP connections. Such servers usually have a limit on how many new connections they can accept per second, so a server cannot be fully utilized immediately after it is turned on. Moreover, before a server is turned off, all its active connections must be reconnected or migrated to other currently active servers. In this talk, I will characterize the unique properties, performance, and power models of connection servers, based on a real data trace collected from the deployed Windows Live Messenger. Using the models, we design server provisioning and load dispatching algorithms and study the subtle interactions between them. We show that our algorithms can save a significant amount of energy without sacrificing user experience.
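
As a hedged sketch of the kind of constraint involved (a simplified model with made-up numbers, not the algorithms from the talk): if each server can hold at most C long-lived connections but accept at most r new connections per second, then the number of active servers is bounded below by both the standing load and the login rate.

    import math

    def servers_needed(total_conns, arrival_rate, cap_per_server, accept_rate):
        # total_conns:    current number of long-lived connections
        # arrival_rate:   new connections arriving per second
        # cap_per_server: max concurrent connections one server can hold
        # accept_rate:    max new connections one server accepts per second
        # A freshly started server adds accept capacity immediately but
        # fills toward cap_per_server only gradually, so both terms bind.
        by_capacity = math.ceil(total_conns / cap_per_server)
        by_accept = math.ceil(arrival_rate / accept_rate)
        return max(by_capacity, by_accept)

    # Hypothetical load: 9M connections, 50k logins/sec, 100k connections
    # and 300 accepted connections/sec per server.
    print(servers_needed(9_000_000, 50_000, 100_000, 300))  # -> 167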

Bio:
Wenbo He is currently an assistant professor in Department of Computer Science at University of New Mexico. Her research focuses on pervasive computing, cyber physical systems, security and privacy issues in networked embedded systems, Internet services and service oriented computing etc. Wenbo got Ph.D. from UIUC in 2008, and she got M.S. from ECE Department at UIUC and M.Eng. from the Department of Automation at Tsinghua University, in 2000 and 1998 respectively. She received her B.S. from the Department of Automation at the Harbin Engineering University in 1995. During August 2000 to January 2005, Wenbo was a software engineer in Cisco Systems, Inc. Wenbo received the Mavis Memorial Fund Scholarship Award from College of Engineering of UIUC in 2006, and the C. W. Gear Outstanding Graduate Award in 2007 from the Department of Computer Science at UIUC. She is also a recipient of the Vodafone Fellowship from 2005 to 2008, and the NSF TRUST Fellowship in 2007.

Constraint Satisfaction for First-Order Logic

Date: Friday, October 31st, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

William McCune
Department of Computer Science
University of New Mexico

Abstract:
I will describe the problem of searching for finite models of statements in first-order and equational logic. This is a kind of finite-domain constraint satisfaction. So far, the methods have been used mostly to search for counterexamples to conjectures, serving as a complement to programs that search for proofs. The expressiveness of the language and the power of the methods point to wider applications. The methods will be illustrated by using the Mace4 program on problems in abstract algebra and on several puzzles. The problem of isomorphic solutions will be addressed. Some background on automated deduction in first-order and equational logic will be included.
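
As a toy illustration of the underlying search problem, here is a brute-force Python sketch that looks for a finite counterexample: a binary operation that satisfies an axiom (associativity) while violating a conjecture (commutativity). Mace4 solves such problems with constraint propagation and other pruning rather than this naive enumeration.

    from itertools import product

    def find_model(n):
        # Search all n**(n*n) operation tables on {0, ..., n-1} for a
        # semigroup (associative operation) that is not commutative,
        # i.e., a counterexample to "every semigroup is commutative".
        elems = range(n)
        for flat in product(elems, repeat=n * n):
            op = [list(flat[i * n:(i + 1) * n]) for i in range(n)]
            associative = all(op[op[a][b]][c] == op[a][op[b][c]]
                              for a in elems for b in elems for c in elems)
            if associative and any(op[a][b] != op[b][a]
                                   for a in elems for b in elems):
                return op
        return None

    # Smallest case: x * y = x on a two-element domain.
    print(find_model(2))  # -> [[0, 0], [1, 1]]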

Bio:
William McCune has been working in automated deduction since 1981. He received a Ph.D. in computer science from Northwestern University in 1984 and worked at Argonne National Laboratory from 1984 through 2006, ending his tenure there as a senior computational logician. He was also a senior research fellow at the University of Chicago Computation Institute. He has been a part-time research professor in the UNM CS department since 2007, working on several projects with Prof. Robert Veroff. McCune received a Royal E. Cabell research fellowship in 1984, chaired the International Conference on Automated Deduction in 1997, and received the Herbrand Award for Distinguished Contributions to Automated Deduction in 2000. He is the primary author of the automated deduction systems Otter, EQP, Prover9, and Mace.

Phase Transitions in Computer Science, Statistics, and Physics

Date: Friday, October 24th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Thomas Hayes
Computer Science Department
UNM

"Phase transition" refers to a phenomenon, observed in many large-scale physical systems, wherein, as one parameter is slowly changed, a sudden sharp change in the qualitative behavior of the system is observed. For instance, as temperature increases past 100C, a dramatic change takes place in a pot of water. Similarly, an iron bar placed in a magnetic field will tend to suddenly magnetize (with most of the electron spins in the same direction) once the field strength passes a certain threshold.

Similar changes can be observed in a number of systems of interest to computer scientists and statisticians. For instance, one can consider the behavior of the number of satisfying assignments of a Boolean formula as the number of clauses slowly increases (say, by adding clauses at random). Or one could consider the number of components in a random graph, as a function of the number of edges. Other, closely related topics include: (1) efficient sampling of random configurations of a system using MCMC algorithms, (2) decay of correlations as a function of distance, and (3) uniqueness of something called the "Gibbs measure".

I will talk about some old and new results in this area.
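
The random-graph transition is easy to observe numerically. In the sketch below (plain Python, illustrative parameters), the control parameter is the expected degree c of an Erdos-Renyi random graph G(n, p) with p = c/(n-1); as c passes 1, a giant connected component abruptly emerges.

    import random

    def largest_component_fraction(n, avg_degree):
        # Build G(n, p) with expected degree avg_degree and return the
        # size of its largest connected component as a fraction of n,
        # using union-find with path halving.
        p = avg_degree / (n - 1)
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    parent[find(i)] = find(j)
        sizes = {}
        for i in range(n):
            r = find(i)
            sizes[r] = sizes.get(r, 0) + 1
        return max(sizes.values()) / n

    for c in (0.5, 1.0, 1.5, 2.0):
        print(c, largest_component_fraction(2000, c))
    # Below c = 1 the fraction is near 0; above it, a giant component.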

Bio:
Prof. Tom Hayes received his Ph.D. in Computer Science at the University of Chicago. His thesis was on "Rapidly Mixing Markov Chains for Graph Colorings." Prior to joining UNM this fall, he did postdoctoral work at U.C. Berkeley and the Toyota Technological Institute at Chicago. His research interests include algorithms, probability, and machine learning.

Nonconvex compressive sensing: getting the most from very little information (and the other way around)

Date: Friday, October 10th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Rick Chartrand
Los Alamos National Laboratory

Abstract:
In this talk we'll look at the exciting recent results showing that most images and other signals can be reconstructed from much less information than previously thought possible, using simple, efficient algorithms. A consequence has been the explosive growth of the new field known as compressive sensing, so called because the results show how a small number of measurements of a signal can be regarded as tantamount to a compression of that signal. The many potential applications include reducing exposure time in medical imaging, sensing devices that can collect much less data in the first place instead of collecting and then compressing, getting reconstructions from what seems like insufficient data (such as EEG), and very simple compression methods that are effective for streaming data and preserve nonlinear geometry.

We'll see how replacing the convex optimization problem typically used in this field with a nonconvex variant has the effect of reducing still further the number of measurements needed to reconstruct a signal. A very surprising result is that a simple algorithm, designed only for finding one of the many local minima of the optimization problem, typically finds the global minimum. Understanding this is an interesting and challenging theoretical problem.

We'll see examples, and discuss algorithms, theory, and applications.
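
One widely used way to attack the nonconvex problem is iteratively reweighted least squares. The Python/NumPy sketch below minimizes a smoothed l_p objective (p < 1) subject to Ax = b, gradually shrinking the smoothing parameter; the parameters and schedule here are illustrative, not the exact algorithm from the talk.

    import numpy as np

    def irls_lp(A, b, p=0.5, iters=50, eps=1.0):
        # min sum_i (x_i^2 + eps)^(p/2)  subject to  A x = b.
        # Each step solves a weighted minimum-norm problem in closed form:
        #   x = W^-1 A^T (A W^-1 A^T)^-1 b,  W = diag((x_i^2 + eps)^(p/2 - 1)).
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares start
        for _ in range(iters):
            winv = (x ** 2 + eps) ** (1 - p / 2)   # inverse weights
            AW = A * winv                          # A @ diag(winv)
            x = winv * (A.T @ np.linalg.solve(AW @ A.T, b))
            eps = max(eps / 10.0, 1e-12)           # anneal the smoothing
        return x

    # Demo: recover a 10-sparse signal of length 200 from 50 measurements.
    rng = np.random.default_rng(0)
    n, m, k = 200, 50, 10
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n))
    xr = irls_lp(A, A @ x0)
    print(np.max(np.abs(xr - x0)))  # typically close to zero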

Bio:
Rick Chartrand was born in Canada and educated in pure mathematics, receiving a Ph.D. from UC Berkeley in 1999 for work in functional analysis, in a subfield devoid of useful applications. He now works as an applied mathematician at Los Alamos National Laboratory. His research interests are focused on image and signal reconstruction from very incomplete information. He works on developing new algorithms, solving the underlying theoretical issues, and exploring useful applications.

Petascale Computational Science on Roadrunner

Date: Friday, October 3rd, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Timothy C. Germann
Deputy Group Leader of the Theoretical Chemistry & Molecular Physics Group (T-12)
LANL

Abstract:
I will describe the initial set of scientific applications and computational kernels that have been implemented on the hybrid Roadrunner supercomputer recently constructed by IBM for Los Alamos National Laboratory (LANL). Each Roadrunner "triblade" compute node consists of two dual-core AMD Opteron microprocessors and four PowerXCell 8i enhanced Cell microprocessors, typically utilized with a programming model of four MPI ranks (one Opteron core and one Cell each) per node. I will describe our adventures first porting, and then drastically rewriting, a molecular dynamics code, SPaSM (Scalable Parallel Short-range Molecular dynamics), which is used for a wide range of scientific studies at LANL, ranging from fluid and materials dynamics to agent-based computational epidemiology. The computation of forces and the updates of particle positions and velocities are performed on the Cells (each with one PPU and eight SPU cores), while the Opterons direct inter-rank communication and perform periodic I/O-heavy tasks, including analysis, visualization, and checkpointing. The nearly perfect weak scaling measured for a standard Lennard-Jones pair potential benchmark yields 369 Tflop/s (double-precision) floating-point performance on the full Roadrunner system (3060 compute nodes); this work is a finalist for the 2008 ACM Gordon Bell Prize, to be announced at SC08 in November.
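
For readers unfamiliar with the kernel being offloaded, here is a schematic Python version of a Lennard-Jones force computation; the production SPaSM kernels of course use cell lists and hand-tuned code on the Cell SPUs, so this is only a sketch of the arithmetic involved.

    import numpy as np

    def lj_forces(pos, box, rc=2.5):
        # O(N^2) Lennard-Jones forces (reduced units: epsilon = sigma = 1)
        # with periodic boundaries and a cutoff radius rc.
        n = len(pos)
        f = np.zeros_like(pos)
        for i in range(n - 1):
            d = pos[i + 1:] - pos[i]                # vectors to partners j > i
            d -= box * np.round(d / box)            # minimum-image convention
            r2 = (d * d).sum(axis=1)
            near = r2 < rc * rc
            inv6 = 1.0 / r2[near] ** 3              # (sigma/r)^6
            fmag = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2[near]
            fij = d[near] * fmag[:, None]
            f[i] -= fij.sum(axis=0)                 # Newton's third law
            f[i + 1:][near] += fij
        return f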

Bio:
Timothy C. Germann is Deputy Group Leader of the Theoretical Chemistry & Molecular Physics Group (T-12) at Los Alamos National Laboratory (LANL). He earned Bachelor of Science degrees in Computer Science and in Chemistry from the University of Illinois in 1991, and a Ph.D. in Chemical Physics from Harvard University in 1995. Following a Research Fellowship in the Miller Institute for Basic Research in Science at UC Berkeley, where he developed parallel algorithms for quantum molecular (reactive) scattering theory, Tim joined LANL in 1997, where he has used large-scale classical molecular dynamics simulations to investigate shock, friction, detonation, and other materials dynamics issues using BlueGene/L, Roadrunner, and other NNSA supercomputer platforms. Along the way, he and his collaborators developed a large-scale epidemiological simulation model and applied it to assess mitigation strategies for outbreaks of either naturally emerging or intentionally released infectious diseases, including pandemic influenza. He has co-authored over 100 peer-reviewed scientific publications, and received a 1998 IEEE Gordon Bell Prize, 2005 and 2007 LANL Distinguished Performance Awards, a 2006 LANL Fellows' Prize for Research, and a 2007 NNSA Defense Programs Award of Excellence.

Roadrunner: A Petaflop/s before its time

Date: Friday, September 26th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Ken Koch
Technical Project Leader for Roadrunner
LANL, CCS-DO

Abstract:
The Roadrunner supercomputer was built by IBM for Los Alamos National Laboratory for the Advanced Simulation and Computing (ASC) Program. Roadrunner achieved 1.026 Petaflop/s running the TOP500 Linpack benchmark on May 26th, 2008, breaking the Petaflop/s barrier sooner than the TOP500 data would have predicted. Los Alamos conceived Roadrunner as a way to enable faster, more energy-efficient, and lower-cost computing through the use of acceleration devices in a hybrid computing design, in this case the Cell microprocessor as an accelerator to an AMD Opteron. Many believe that the multi-core and many-core future of microprocessors will include the use of a non-uniform mix of devices and/or cores, some of which will have special functionality. Roadrunner is also a platform to prepare for that trend.

This talk will provide the configuration details of this hybrid Cell-accelerated supercomputer and explain how it works. It will introduce the modified Cell processor and the hybrid TriBlade compute node developed for Roadrunner. The talk will also cover our application experiences and the programming approach taken by the early applications converted by Los Alamos staff to run on the accelerated Roadrunner machine; two of these applications are finalists for this year’s Gordon Bell award at SC08.

Bio:
Kenneth Koch has worked at Los Alamos National Laboratory since 1985 in the fields of nuclear weapons simulation and high-performance computing. He helped create the ASCI (now ASC) Program and served as the program manager of all ASC simulation codes at Los Alamos for several years. Since 2004, Ken has been involved in high-performance computing in the Computer, Computational, and Statistical Sciences Division (CCS) at LANL. He has led efforts in advanced computer architectures, including one using FPGAs and GPUs for scientific computing. He was the main LANL architect behind the design and implementation of the Roadrunner Cell-accelerated hybrid supercomputer. Ken received a Ph.D. in Nuclear Engineering from Purdue University in 1985 and also holds a Master's degree and two Bachelor's degrees in Nuclear and Mechanical Engineering, also from Purdue.

Implementing Scheme in a Virtual World

Date: Friday, September 12th, 2008
Time: 2 pm — 3:15 pm
Place: ME 218

Lance R. Williams
University of New Mexico
Department of Computer Science

Abstract:
At any given moment in time, hundreds of thousands of people worldwide are immersed in dozens of virtual worlds playing massively multiplayer online games (MMOGs). Second Life is unique among MMOGs because the players themselves create the content of the virtual world they inhabit. They do this (in large part) by means of computer programming, and I believe this fact makes Second Life a potentially important resource for computer science educators.

Furthermore, because Second Life supports a programming model where a large number of small scripts execute in parallel and asynchronously, and communicate via message passing, it is an ideal testbed for research in many areas of computer science, including distributed computing, swarm robotics, self-assembly, distributed sensor networks, and artificial life.

In this talk, I first provide an overview of Second Life and its programming model. I then describe and demonstrate a series of evaluators for the Scheme programming language which I have constructed inside the game, including one evaluator where the heap and virtual machine are represented in a completely distributed fashion as a school of swimming fish. In this way, I hope to illustrate Second Life's value to computer science pedagogy and as a testbed for research in distributed computation.
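
To make concrete what an evaluator for Scheme involves at its core, here is a minimal environment-passing sketch in Python (ordinary in-memory data structures; the in-world evaluators demonstrated in the talk additionally represent the heap and virtual machine as objects in the game):

    import operator as op

    def evaluate(x, env):
        # A tiny Scheme subset: numbers, symbols, quote, if, define,
        # lambda, and procedure application.
        if isinstance(x, str):                      # symbol -> lookup
            return env[x]
        if not isinstance(x, list):                 # literal number
            return x
        head = x[0]
        if head == 'quote':
            return x[1]
        if head == 'if':
            _, test, conseq, alt = x
            return evaluate(conseq if evaluate(test, env) else alt, env)
        if head == 'define':
            env[x[1]] = evaluate(x[2], env)
            return None
        if head == 'lambda':
            _, params, body = x
            return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
        fn = evaluate(head, env)                    # application
        return fn(*(evaluate(arg, env) for arg in x[1:]))

    global_env = {'+': op.add, '-': op.sub, '*': op.mul, '<': op.lt}
    # ((lambda (n) (* n n)) 7)  =>  49
    print(evaluate([['lambda', ['n'], ['*', 'n', 'n']], 7], global_env))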

Bio:
Lance R. Williams received his BS in Computer Science from the Pennsylvania State University in 1985 and his MS and Ph.D. in Computer Science from the University of Massachusetts at Amherst in 1988 and 1994. His dissertation, in the area of computer vision, was on perceptual completion of surfaces that are only partially visible. After completing his Ph.D., he spent four years at NEC Research Institute in Princeton, NJ, where he developed a series of increasingly general neural models of the process used by the human visual system to compute the shape of object boundaries where they cannot be directly observed. In 1997, Dr. Williams joined the faculty of the Department of Computer Science at the University of New Mexico, where he is currently an Associate Professor. His research since joining UNM has addressed a range of topics in computer vision, neural computation, digital image processing, and human-computer interaction. Since his first exposure to it in the early 1980s, he has had an enduring interest in the LISP programming language and its implementation.