UNM Computer Science

Colloquia



Robustness in Depth: Rebalancing Efficiency and Reliability in the Computational Stack

Date: Friday, December 4th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041

David Ackley
Associate Professor
Dept. of Computer Science
University of New Mexico

Abstract:
The growth of the serial digital computer---making a CPU faster with a memory vaster---has now stalled, even as our ability to manufacture more, denser, and cheaper chips continues to expand. Although the efficiency of parallel architectures for general-purpose computation is often questioned, parallel hardware also offers the possibility of improving computational robustness. The future, somehow, will be massively parallel and distributed, components will come and go while computations continue, and---though our ability to pose ever larger computations will remain prodigious---it will often be as important to spend effort on robustness as on efficiency.

In this talk I will suggest that the traditional roles assigned to computer hardware and computer software---so revolutionary in the middle of the last century---are increasingly counterproductive and need to be renegotiated. Although hints of that process can already be seen, I will argue that for computer science as well as society at large, we would be better off recognizing the sea change that is now upon us.

A tabletop computational grid involving dozens of processors will be assembled and demonstrated.

Bio:
David Ackley received his Ph.D. in Computer Science from Carnegie Mellon, and was a member of the technical staff at Bellcore before joining the faculty at UNM. His research interests include artificial life and the connections between computation and biology, distributed and adaptive systems, and making things that do things by themselves.

Artificial Cells as Fixed-Points of Distributed Virtual Machines

Date: Friday, November 20th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Lance R. Williams
Associate Professor
Dept. of Computer Science
University of New Mexico

Abstract:
What distinguishes animate from inanimate matter is that animate matter uses information processing to work against entropy and produce a state of increased order in the physical world. Even the simplest single-cell organisms are able to translate self-descriptions stored on DNA molecules into copies of themselves. We believe that this remarkable feat, which more than any other defines life itself, is accomplished by means of a process which is intimately related to a topic at the heart of computer science, namely, compilation of programming languages.

Self-replicating systems based on von Neumann's universal constructor lack transparency and for this reason have had virtually no impact in biology. We believe that fundamental computational principles underlying their operation, e.g., self-reflection, are obscured by the complexity of low-level implementations, e.g., cellular automata. We wish to bridge the gap between principles and implementation by means of transparent automatic processes for translating abstract descriptions of self-reproducing machines into physical implementations. The abstract descriptions are expressions in high-level functional programming languages which are compiled into bytecode quines, i.e., virtual-machine fixed-points. Implementation of the bytecodes as lightweight processes, or actors, which accomplish evaluation by means of continuation passing, yields a self-replicating distributed virtual machine--or artificial cell.
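
To make the central construct concrete: a "quine" is a program whose output is its own source text, i.e., a fixed point of evaluate-and-print. The fragment below is a classic minimal Python quine, offered only as an illustration of that fixed-point idea; it is not the speaker's functional-language, bytecode, or actor-based implementation.

    # A classic Python quine: running this file prints its own source text,
    # making the program a fixed point of "evaluate and print".
    s = '# A classic Python quine: running this file prints its own source text,\n# making the program a fixed point of "evaluate and print".\ns = %r\nprint(s %% s)'
    print(s % s)

The self-replicating distributed virtual machines described in the abstract pursue the same fixed-point property, but at the level of compiled bytecode spread across continuation-passing actors.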

Bio:
Lance R. Williams received the BS degree in computer science from the Pennsylvania State University and the MS and PhD degrees in computer science from the University of Massachusetts. Prior to joining UNM, he was a post-doctoral scientist at NEC Research Institute. His research interests include computer vision and graphics, neural computation, and digital image processing.

High Efficiency Model Identification for Statistical Graphical Models

Date: Friday, November 6th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Terran Lane
Associate Professor
Department of Computer Science
University of New Mexico

This is joint work with Ben Yackley, Blake Anderson, Eduardo Corona, Curt Storlie, Karl Friston, and Will Penny.

Abstract:
Statistical graphical models, such as Bayesian networks, Markov random fields, or factor graphs, have become increasingly important for data modeling in fields as diverse as economics, ecology, physics, neuroscience, computer vision, and genomics. These models are attractive because they efficiently and compactly represent the often complex probability distributions that arise in such fields, and because they provide semantically rich models to domain scientists. However, the graph structure of the target model is often initially unknown -- indeed, in many cases the model structure is, itself, the quantity of interest to the domain scientist. The problem of structure identification (or model selection, if you prefer) remains a prominent open question in this field. Statistically, the problem is one of identifying (conditional, multivariate) dependencies from data, and is reasonably well understood. Computationally, however, the task is quite challenging: worst-case analysis shows that optimal structure identification is NP-hard, while practical algorithms typically have high-order polynomial runtime and may require many scans over the complete data in order to accumulate sufficient statistics.

In this talk, we present preliminary work on a new approach to structure identification that exploits the topology of graph structure space itself. The key insight is that we need not compute the exact optimality criterion for every model we evaluate during search, if we can approximate it well. And building good function approximators is precisely what Machine Learning and Statistics are very good at. We demonstrate how to use this insight to build a structure search algorithm that runs orders of magnitude faster than conventional approaches, while achieving results that are at least as good, if not better. We give preliminary data demonstrating our approach on a number of synthetic and real-world data sets, including some challenging neuroimaging data sets that involve hidden variables.
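
A rough sense of the key insight can be given in a few lines of code. The sketch below is a hypothetical illustration, not the authors' algorithm: a greedy edge-flipping search in which a cheap ridge-regression surrogate, trained on the structures scored so far, ranks all candidate graphs, and only the few top-ranked candidates are evaluated with the expensive exact criterion. The score function, surrogate, parameters, and synthetic data are all placeholders, and acyclicity checks are omitted for brevity.

    # Hypothetical sketch, not the authors' algorithm: surrogate-guided structure search.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def exact_score(adj, data):
        # Stand-in for an expensive optimality criterion: BIC-like penalized
        # Gaussian log-likelihood of each node given its parents.
        n, d = data.shape
        score = 0.0
        for j in range(d):
            parents = np.flatnonzero(adj[:, j])
            X = np.column_stack([data[:, parents], np.ones(n)])
            beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
            resid = data[:, j] - X @ beta
            score += -n * np.log(resid.var() + 1e-9) - np.log(n) * len(parents)
        return score

    def fit_surrogate(feats, scores):
        # Cheap approximator of the exact score: ridge regression on edge indicators.
        X = np.column_stack([feats, np.ones(len(feats))])
        w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ scores)
        return lambda F: np.column_stack([F, np.ones(len(F))]) @ w

    def neighbors(adj):
        # All graphs reachable by adding or removing a single directed edge.
        d = adj.shape[0]
        for i, j in itertools.permutations(range(d), 2):
            nbr = adj.copy()
            nbr[i, j] = 1 - nbr[i, j]
            yield nbr

    def search(data, steps=20, exact_budget=3):
        d = data.shape[1]
        adj = np.zeros((d, d), dtype=int)
        feats_seen = [adj.ravel().astype(float)]
        scores_seen = [exact_score(adj, data)]
        for _ in range(steps):
            cands = list(neighbors(adj))
            F = np.array([c.ravel().astype(float) for c in cands])
            approx = fit_surrogate(np.array(feats_seen), np.array(scores_seen))(F)
            top = np.argsort(approx)[-exact_budget:]   # exact-score only these few
            exact = {i: exact_score(cands[i], data) for i in top}
            best = max(exact, key=exact.get)
            if exact[best] <= scores_seen[-1]:
                break                                   # no exact improvement found
            adj = cands[best]
            feats_seen.append(adj.ravel().astype(float))
            scores_seen.append(exact[best])
        return adj, scores_seen[-1]

    # Tiny synthetic demo with the chain x0 -> x1 -> x2.
    n = 500
    x0 = rng.normal(size=n)
    x1 = 2.0 * x0 + rng.normal(size=n)
    x2 = -1.0 * x1 + rng.normal(size=n)
    adj, score = search(np.column_stack([x0, x1, x2]))
    print("learned adjacency matrix:\n", adj, "\nfinal score:", score)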

Bio:
Terran Lane is an associate professor of computer science at UNM. His primary (academic) interests are machine learning; reinforcement learning, behavior, and control; and artificial intelligence in general. Professor Lane is also interested in computer and information security, privacy, and bioinformatics.

Unraveling the Intricacies of Spatial Organization of the ErbB Receptors and Downstream Signaling Pathways

Date: Friday, October 30th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041


Jeremy Edwards
Associate Professor
Dept. of Molecular Genetics and Microbiology, UNM Health Sciences Center, and Dept. of Chemical Engineering
University of New Mexico

Abstract:
Will be available shortly

Bio:
Prof. Edwards received both an MS and a PhD in Bioengineering from UCSD. He held positions at Harvard Medical School and the University of Delaware before joining UNM in 2005. He is a member of the Cancer Research and Treatment Center at UNM.

Fear in Mediation, Exploiting the Windfall of Malice

Date: Friday, October 9th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041


Jared Saia
Associate Professor
Department of Computer Science
University of New Mexico

Abstract:
In this talk, we consider a problem at the intersection of game theory and algorithms. Recent results show that the existence of malicious players in a game can, somewhat surprisingly, actually improve social welfare. This phenomenon has been called the "windfall of malice". We ask: "Is it possible to achieve the windfall of malice even without the actual existence of malicious players?" We are able to show that, in some cases, the answer is yes. How can we achieve the beneficial impact of malicious players without their actual presence? Our approach is based on the concept of a mediator. Informally, a mediator is a trusted third party that suggests actions to each player. The players retain free will and can ignore the mediator's suggestions. The mediator proposes actions privately to each player, but the algorithm the mediator uses to decide what to propose is public knowledge. Notably, it is possible to simulate a mediator, without the need for a trusted third party, through the technique of "cheap talk"; this technique also applies to our own approach.
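
As a toy illustration of the mediator concept (a standard correlated-equilibrium example, not one taken from the talk), consider the game of Chicken with hypothetical payoffs. The mediator publicly commits to a distribution over joint action profiles and privately recommends to each player only its own component; the "obedience" check in the sketch below verifies that following the recommendation is always a best response, and the resulting play yields higher social welfare than any Nash equilibrium of this particular game.

    # Toy illustration, not from the talk: a mediator for the game of Chicken.
    import itertools

    D, C = 0, 1                                   # actions: Dare, Chicken
    payoff = {(D, D): (0, 0), (D, C): (4, 1),     # payoff[(row, col)] = (row's, col's)
              (C, D): (1, 4), (C, C): (3, 3)}     # hypothetical payoff values

    # Public distribution over joint recommendations used by the mediator.
    mediator = {(D, C): 1/3, (C, D): 1/3, (C, C): 1/3}

    def obedient(dist):
        # Correlated-equilibrium ("obedience") check: for every private recommendation,
        # following it must be a best response given the belief it induces.
        for player in (0, 1):
            for rec in (D, C):
                cond = {a: p for a, p in dist.items() if a[player] == rec}
                prob = sum(cond.values())
                if prob == 0:
                    continue
                def expected(action):
                    ev = 0.0
                    for a, p in cond.items():
                        joint = (action, a[1]) if player == 0 else (a[0], action)
                        ev += (p / prob) * payoff[joint][player]
                    return ev
                if any(expected(dev) > expected(rec) + 1e-12 for dev in (D, C)):
                    return False
        return True

    def welfare(dist):
        return sum(p * sum(payoff[a]) for a, p in dist.items())

    mixed_nash = {a: 0.25 for a in itertools.product((D, C), repeat=2)}  # each dares w.p. 1/2
    print("mediator obeys equilibrium constraints:", obedient(mediator))  # True
    print("social welfare with mediator:", welfare(mediator))             # ~5.33
    print("social welfare at mixed Nash:", welfare(mixed_nash))           # 4.0
    print("social welfare at best pure Nash:", sum(payoff[(D, C)]))       # 5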

My talk will describe three results. First, we introduce a general method for designing mediators that is inspired by careful study of the windfall of malice effect. Second, we show the applicability of our approach by using it to design mediators for two particular games. Finally, we show the limits of our technique by proving an impossibility result: for a large class of games, no mediator can improve the social welfare over the best Nash equilibrium.

Bio:
Jared Saia is an Associate Professor at the University of New Mexico. His broad research interests are in theory and algorithms with a focus on designing distributed algorithms that are robust against a computationally unbounded adversary. He is the recipient of several grants and awards including an NSF CAREER award and School of Engineering Faculty Research Excellence award.

Quantitative Analysis and Simulation of Latency-Related Pathways in a Murine Model of Mycobacterium tuberculosis Infection

Date: Friday, October 2nd, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041


Elebeoba E. May
Sandia National Lab

Abstract:
Tuberculosis (TB), caused by the bacterium Mycobacterium tuberculosis (Mtb), is a growing international health crisis. Mtb is able to persist in host tissues in a state of nonreplicating persistence (NRP).

In vitro models have identified enzymes and associated pathways upregulated during NRP, which are thought to supply energy through alternative pathways to the biosynthetically restricted pathogen (Wayne and Sohaskey, 2001). In the hypoxic model of NRP, the tubercle bacilli can circumvent the shortage of oxygen by developing alternative energy generation mechanisms using pathways such as those involved in the glyoxylate-to-glycine shunt (GtG), which may serve to replenish NAD (Wayne and Sohaskey, 2001; Wayne and Lin, 1982). Using Michaelis-Menten and mass-action kinetics, data from MetaCyc, and initial rates from BRENDA, we are constructing a BioXyce model of M. tuberculosis that includes pathways identified through in vitro and in vivo studies. Simulation and analysis of NRP-relevant pathways will enable quantitative assessment of the molecular basis of latency and reactivation in murine models of infection.
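
As a minimal sketch of the kind of kinetic building block such a pathway model is assembled from (not BioXyce itself), the fragment below integrates a single enzyme-catalyzed step with Michaelis-Menten kinetics whose product decays by first-order mass action. Every rate constant is a made-up placeholder rather than a value taken from MetaCyc or BRENDA.

    # A minimal sketch (not BioXyce): one Michaelis-Menten step feeding a product
    # that decays by mass action. All rate constants are hypothetical placeholders.

    Vmax, Km = 1.0, 0.5    # Michaelis-Menten parameters (hypothetical)
    k_deg = 0.2            # mass-action degradation rate of the product (hypothetical)

    def derivatives(s, p):
        v = Vmax * s / (Km + s)    # Michaelis-Menten rate: v = Vmax*[S]/(Km + [S])
        return -v, v - k_deg * p   # d[S]/dt, d[P]/dt

    # Forward-Euler integration of the two-species system.
    dt, t_end = 0.01, 20.0
    s, p = 5.0, 0.0                # initial concentrations (arbitrary units)
    for _ in range(int(t_end / dt)):
        ds, dp = derivatives(s, p)
        s, p = s + dt * ds, p + dt * dp

    print(f"t = {t_end:g}: substrate = {s:.3f}, product = {p:.3f}")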

This work is supported by an NIH/NHLBI grant 5K25HL75105-3. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

Bio:
Dr. Elebeoba E. May received her Ph.D. in computer engineering from North Carolina State University and is a Principal Member of the Technical Staff in Sandia National Laboratories' Discrete Mathematics and Complex Systems Department. She is an Adjunct Research Assistant Professor in UNM-HSC's Internal Medicine Department, with a joint appointment as an Adjunct Research Assistant Professor in UNM's Electrical and Computer Engineering Department. Her research interests include the use and application of information theory, coding theory, and signal processing to the analysis of genetic regulatory mechanisms, the design and development of intelligent biosensors, and large-scale simulation and analysis of biological pathways and systems. Since joining SNL, Dr. May has provided computational biology leadership in the development of BioXyce, a large-scale systems biology simulation tool, and she continues to lead development efforts in the application of BioXyce to simulation-based studies of host/pathogen interactions.

Dr. May is a recipient of the 2003 and 2008 Women of Color Research Sciences and Technology Award for Outstanding Young Scientist or Engineer and an NIH/NHLBI K25 Quantitative Research Career Development Grant to quantitatively decipher the genetic basis of latency in M. tuberculosis infection.

Exploiting and Providing Research Data: Finding Strategies to Help Researchers

Date: Friday, September 18, 2009
Time: 1 pm
Place: Centennial Science and Engineering Library Cafe Area

Professor Malcolm Atkinson and Professor David De Roure

Topic Summary:
The presentation topic is critical for young scientists and any researcher using large data sets and/or researching Internet data. Data-intensive science is emerging as a leading new research method and a focus of NSF, DOE, DOD, NIH, NEH, and other national funding agencies. The speakers are both active participants in the global research environment. The effective use of data is key to advances in almost all disciplines. There are opportunities for significant advances as a result of the pervasive growth in digital data, communication, and devices; however, there are many challenges in enabling researchers to become adept in this new, fast-changing context.

Speakers:
Professor Malcolm Atkinson plays a leading role in UK science and data policy making and serves on numerous European Union advisory boards, such as those of the Baltic Grid and GEON. He leads training and education for EU-funded projects such as the International Collaboration to Extend and Advance Grid Education. He is a member of the Global Grid Forum Steering Group and the Data Area Director for GGF.

Professor David De Roure developed myExperiment, which is designed to preserve and share scientific workflows. A founding member of the School's Intelligence, Agents, Multimedia Group, he leads its e-Research activities, is a Director of the Pervasive Systems Centre, and is involved in the UK e-Science and e-Social Science programs. His work focuses on creating new research methods in and between multiple disciplines, and his projects draw on Web 2.0, Semantic Web, and workflow technologies.

Computer Graphics 2.0 - The Virtual World is not Enough

Date: Friday, Sep 11th, 2009
Time: 12 pm — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Prof. Dr.-Ing. Marcus Magnor
Computer Graphics Lab
TU Braunschweig

Abstract:
Expectations on computer graphics performance are rising continuously: whether in flight simulators, surgical planning systems, or computer games, ever more realistic rendering results are to be achieved at real-time frame rates. In fact, thanks to progress in graphics hardware as well as rendering algorithms, visual realism is today within reach of off-the-shelf PCs. With rapidly advancing rendering capabilities, the modeling process is becoming the limiting factor in computer graphics. Higher visual realism can be attained only by having available more detailed and accurate scene descriptions. So far, however, modeling 3D geometry and object texture, surface reflectance characteristics and scene illumination, character animation and emotion is a labor-intensive, tedious process. The cost of visually authentic content creation using conventional approaches increasingly threatens to stall further progress in realistic rendering applications.

In my talk, I will present an alternative modeling approach. I will discuss and exemplify different approaches on how to recover digital models from real-world imagery. The models may be derived either based on the physics of the scene, or by regarding perceptional consequences only. While the former approach yields physically meaningful information about the scene, approaches of the latter kind may allow for easier modeling and more natural-appearing rendering results. This opens up various new opportunities, extending the scope of computer graphics beyond virtual worlds to encompass visual reality.

Bio:
Marcus Magnor heads the Computer Graphics Lab of the Computer Science Department at Braunschweig University of Technology (TU Braunschweig). He received his BA (1995) and MS (1997) in Physics from the University of Würzburg and the University of New Mexico, respectively, and his PhD (2000) in Electrical Engineering from the Telecommunications Lab at the University of Erlangen. For his post-graduate studies, he joined the Computer Graphics Lab at Stanford University. In 2002, he established the Independent Research Group Graphics-Optics-Vision at the Max-Planck-Institut Informatik in Saarbrücken. He completed his habilitation and received the venia legendi in Computer Science from Saarland University in 2005. His research interests meander along the visual information processing pipeline, from image formation, acquisition, and analysis to image synthesis, display, perception, and cognition. Recent and ongoing research topics include video-based rendering, 3D-TV, augmented vision, video editing, optical phenomena, as well as astrophysical visualization.