UNM Computer Science

Colloquia



Ontologies in Biomedical Research

Date: Friday, December 17, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Jessica Turner

Mind Research Network

Biomedical research has produced an enormous amount of data suitable for mining, analysis, and meta-analysis. As a byproduct, new databases of original data, published results, and atlases are constantly emerging. Yet communication and the integration of information within and across these data resources remain limited. Numerous existing efforts aim to develop ontologies, standardized structures that facilitate the exchange of information: RadLex, an ontology of medical imaging acquisition strategies (http://radlex.org); lexicons and ontologies of neuroanatomical regions (e.g., NeuroNames and the Foundational Model of Anatomy (FMA)); full medical ontologies for clinical care concepts, such as the Systematized Nomenclature of Medicine (SNOMED) Ontology; descriptions of experimental methods and materials, such as the Ontology for Biomedical Investigations (OBI) and the Cognitive Paradigm Ontology (CogPO); and many others. In this presentation, I will discuss the various motivations for developing ontologies within biomedical research, with examples drawn from several fields of research. In particular, I will review the way biomedical research has led to multiple ontologies for different scientific domains, and the challenges that arise in coordinating biomedical ontology development around a set of core principles. The benefit of these efforts, however, lies in the potential for automated reasoning over archived experimental data from multiple species and methodologies to identify novel results.
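As a flavor of what "automated reasoning over an ontology" means in practice, the toy sketch below infers implied class membership from transitive is-a triples. The terms and relations are hypothetical illustrations, not drawn from FMA, SNOMED, or any of the ontologies above, and real systems use description-logic reasoners rather than this hand-rolled closure:

    # Hypothetical "is_a" triples and a transitive-closure query.
    IS_A = {
        ("hippocampus", "limbic_structure"),
        ("limbic_structure", "brain_region"),
        ("amygdala", "limbic_structure"),
    }

    def ancestors(term):
        """Return every class `term` transitively is_a."""
        found = set()
        frontier = {term}
        while frontier:
            nxt = {parent for (child, parent) in IS_A if child in frontier}
            frontier = nxt - found
            found |= nxt
        return found

    # A query tool can now infer that a finding annotated "hippocampus"
    # is relevant to any study of "brain_region":
    assert "brain_region" in ancestors("hippocampus")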

Bio:
Jessica Turner received her PhD in Experimental Psychology in 1997 from the University of California, Irvine. Her research interests range from the basic cognitive neuroscience of perception and memory to clinical studies of neuropsychiatric and degenerative disorders, as well as the knowledge representation systems needed to reason automatically over scientific findings from multiple domains. She recently joined the Mind Research Network as an Associate Professor in Translational Neuroscience, and has an appointment in the UNM Department of Psychiatry.

Artificial Cells as Reified Quines

Date: Friday, December 10, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Lance R. Williams

Assoc. Professor
Department of Computer Science
University of New Mexico

Cellular automata (CA) were initially conceived as a formal model to study self-replication in artificial systems. Although self-replication in natural systems is characterized by exponential population increase until exhaustion of resources, after more than fifty years of research, no CA-based self-replicator has come close to exhibiting such rapid population growth. We believe this is due to an intrinsic limitation of CAs: the inability to model extended structures that are held together by bonds and capable of diffusion.

To address this shortcoming, we introduce a model of parallel distributed spatial computation which is highly expressive, infinitely scalable, and asynchronous. We then use this model to define a series of self-replicating machines. These machines assemble copies of themselves from components supplied by diffusion and increase in number exponentially until the supply of components is exhausted. Because they are both programmable constructors for a class of machines and self-descriptions, we call these machines reified quines.
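For readers unfamiliar with the term, a quine is a program whose output is its own source code. The classic two-line Python example below illustrates only the self-description idea; the machines in the talk are constructors that assemble copies of themselves from simulated components, not printers:

    # The two lines below form a textual quine: the string s is a
    # description of the program, and the print acts as a "constructor"
    # that rebuilds the program from that description.
    s = 's = %r\nprint(s %% s)'
    print(s % s)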

Bio:
Lance R. Williams received his BS degree in computer science from the Pennsylvania State University and his MS and PhD degrees in computer science from the University of Massachusetts. Prior to joining UNM, he was a post-doctoral scientist at NEC Research Institute. His research interests include computer vision and graphics, digital image processing, and neural computation.

Some Computational Science Experiences

Date: Friday, December 3, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Richard Barrett
Scalable Computer Architectures
Sandia National Laboratories

I will describe some of my experiences working in the computational sciences world, discuss how computer scientists can contribute, and present some of the science that can, and has, resulted from this work. This will be somewhat informal and lightweight, so grab some popcorn and put your feet up...

Bio:
Richard Barrett is a member of the Scalable Computer Architectures group at Sandia National Laboratories.

Prosocial preferences and the evolution of behavior within and between groups

Date: Friday, November 19, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Jeremy Van Cleve
Omidyar Postdoctoral Fellow
Santa Fe Institute

Although much is known about the evolutionary forces that promote or inhibit prosocial or cooperative behavior, much less is known about how the proximate mechanisms underlying behavior interact with those forces. In this talk, we will model a proximate mechanism of behavior based on social objectives or preferences. Using this model, we will show how prosocial preferences can evolve even when the payoffs individuals get are in direct conflict, and argue that such preferences form the basis of emotions such as empathy. The evolution of these preferences depends on the level of responsiveness that they generate between individuals. We show that accounting for this responsiveness in structured populations resolves some of the controversy concerning whether Darwinian adaptations can occur at the level of the group.
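As a generic illustration, and emphatically not the speaker's model, the sketch below runs replicator dynamics for a donation game with an assortment parameter r standing in loosely for responsiveness between individuals; cooperation spreads only when r is large enough:

    # Donation game: benefit b to the partner, cost c to the actor.
    b, c = 3.0, 1.0

    def coop_advantage(x, r):
        # With assortment r, a cooperator's partner cooperates with
        # probability r + (1 - r) * x; a defector's with (1 - r) * x.
        return (b * (r + (1 - r) * x) - c) - b * (1 - r) * x

    for r in (0.0, 0.5):
        x = 0.01                      # initial cooperator frequency
        for _ in range(2000):         # Euler steps of dx/dt = x(1-x)*adv
            x += 0.01 * x * (1 - x) * coop_advantage(x, r)
        print(f"r = {r}: cooperator frequency settles near {x:.2f}")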

Bio:
Jeremy Van Cleve is an Omidyar Postdoctoral Fellow at the Santa Fe Institute. He is generally interested in applying theoretical approaches to conceptual questions in evolutionary biology and ecology. Some of his current research topics include the evolution of genomic imprinting, the evolution of prosocial behaviors, evolution in changing environments, and theoretical methods in evolutionary theory. Jeremy received his Ph.D. in Biological Sciences from Stanford University in 2009.

Software Protection and Assurance Using Process-level Virtualization

Date: Friday, November 12, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Jack Davidson
Professor of Computer Science
University of Virginia

In this talk, I give a brief introduction to software dynamic translation (SDT), a powerful technology that has proven useful for addressing various computer security issues. To illustrate the power and utility of software dynamic translation, I describe its application to two aspects of cybersecurity: software protection and software assurance. Software protection is concerned with protecting intellectual property and preventing an adversary from tampering with a software application. Software assurance is concerned with ensuring that software is free from system vulnerabilities. I conclude the talk by briefly describing a new project, PEASOUP (Preventing Exploits against Software of Uncertain Provenance), that relies heavily on SDT. I enumerate some of the key research challenges that must be addressed for PEASOUP to be successful.
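The toy sketch below conveys the core SDT idea, rewriting code just before it first runs in order to insert safety checks. The mini-instruction set and policy are invented for illustration; real SDT systems such as those in the talk operate on native machine code:

    # Toy SDT: translate each basic block once, inserting a policy guard
    # before every risky instruction, then run from the translated cache.
    def translate(block, policy):
        out = []
        for instr in block:
            if instr[0] == "syscall":
                out.append(("check", policy, instr))   # guard the syscall
            out.append(instr)
        return out

    def execute(program, policy):
        cache = {}                      # translated-code cache
        for name, block in program.items():
            if name not in cache:
                cache[name] = translate(block, policy)
            for op in cache[name]:
                if op[0] == "check":
                    if not op[1](op[2]):
                        raise RuntimeError(f"policy blocked {op[2]}")
                    continue
                print("exec:", op)

    program = {"main": [("load", "x"), ("syscall", "write")]}
    execute(program, policy=lambda instr: instr[1] in {"read", "write"})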

Bio:
Jack Davidson is a Professor of Computer Science at the University of Virginia. He joined the faculty in 1981 after receiving his Ph.D. in Computer Science from the University of Arizona. Professor Davidson's research interests include compilers, programming languages, computer architecture, embedded systems, and computer security. He is the author of more than 140 papers in these fields. Professor Davidson's current research is focused on computer security. He is the principal investigator on several ongoing grants from the National Science Foundation and other agencies to develop comprehensive methods for protecting software from malicious attacks.

Professor Davidson is a Fellow of the Association for Computing Machinery (ACM). He is past chair of ACM's Special Interest Group on Programming Languages (SIGPLAN). He currently serves as co-chair of ACM's Publication Board, which oversees ACM's portfolio of publications (39 journals, 4 magazines, and some 360 conferences) and the administration and development of the Digital Library.

Professor Davidson is co-author of two best-selling introductory programming textbooks, C++ Program Design: An Introduction to Object-Oriented Programming, 3rd edition, and Java 5.0 Program Design: An Introduction to Object-Oriented Programming, 2nd edition. In 2008, he won the IEEE Computer Society Taylor L. Booth Education Award for excellence in computer science and engineering education. Professor Davidson is currently developing an innovative undergraduate curriculum focused on computer security.

Coordination in Distributed Software Development

Date: Friday, November 5, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Anita Sarma
Assistant Professor, Computer Science and Engineering Department
University of Nebraska, Lincoln

Coordination is inherent in any group work, and software development is no exception. In this talk, I will trace the evolution of support for coordination in software development by introducing the different coordination paradigms that have emerged. I will then discuss my experiences in building two coordination tools, Tesseract and Palantír. Tesseract is an interactive environment that enables developers to explore and understand the various relationships that exist among different project entities, such as artifacts, developers, bugs, and communications, in a software project. Palantír augments existing configuration management systems with workspace awareness to inform developers of ongoing changes and their effects, prompting users to self-coordinate. Lessons from both these tools will be framed in the broader context of coordination needs in software development.
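A minimal sketch of the workspace-awareness idea, as a toy illustration and not Palantír's implementation: detect when two developers' uncommitted changes touch the same files, so they can self-coordinate before a conflict lands in the repository:

    # Hypothetical uncommitted-change sets per developer workspace.
    workspaces = {
        "alice": {"parser.py", "lexer.py"},
        "bob":   {"lexer.py", "ast.py"},
    }

    def potential_conflicts(workspaces):
        """Yield each pair of developers editing overlapping files."""
        names = sorted(workspaces)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                overlap = workspaces[a] & workspaces[b]
                if overlap:
                    yield (a, b, overlap)

    for a, b, files in potential_conflicts(workspaces):
        print(f"{a} and {b} are both editing: {', '.join(sorted(files))}")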

Bio:
Anita Sarma is an Assistant Professor in the Computer Science and Engineering Department at the University of Nebraska, Lincoln. She received her PhD from the Department of Informatics in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine. Her advisor was Professor André van der Hoek. She then completed a two-year postdoc at the Carnegie Mellon University School of Computer Science with Dr. James Herbsleb. Her research interests lie primarily in the intersection of software engineering and computer-supported cooperative work, focusing on understanding and supporting coordination as an interplay of people and technology. Her research is driven by a strong desire to offer practical solutions to real-world problems through the construction of novel software tools and environments.

BAR Protocols for MAD Services

Date: Friday, October 29, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Mike Dahlin
Professor, Department of Computer Science
University of Texas at Austin

MAD services, spanning Multiple Administrative Domains, coordinate machines controlled by different organizations to provide a desired service. Examples include peer-to-peer systems, Internet routing, and cloud storage. MAD services require machines to cooperate and rely on each other. However, some machines may crash or malfunction; some software may be buggy or misconfigured; and some users may be selfish or even malicious. How do we build distributed services that work when no node has an a priori guarantee that any of the other nodes will follow the protocol?

In this talk we examine Byzantine Altruistic Rational (BAR) protocols. To meet the needs of MAD environments, BAR protocols draw from distributed systems and game theory to tolerate both *Byzantine* behaviors, in which broken, misconfigured, or malicious nodes deviate arbitrarily from their specified protocol, and *rational* behaviors, in which selfish nodes deviate from the protocol to increase their own utility. We first argue that BAR is the right model for constructing MAD services. Then, we explore the challenges of constructing such protocols by examining several case-study services. Finally, we identify several limitations and open research questions.
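As a rough orientation to the fault-tolerance side of this model, here is a back-of-the-envelope sketch of the classic Byzantine agreement bounds (n >= 3f + 1) often used when sizing such deployments; the BAR papers refine this picture with rational players, and that analysis is not reproduced here:

    def min_replicas(f):
        """Smallest n tolerating f Byzantine nodes under the 3f+1 bound."""
        return 3 * f + 1

    def quorum_size(n):
        """Quorum guaranteeing any two quorums share >= f+1 correct nodes."""
        f = (n - 1) // 3
        return n - f          # e.g., 3 of 4, 5 of 7

    for f in range(1, 4):
        n = min_replicas(f)
        print(f"f={f}: n={n}, quorum={quorum_size(n)}")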

Bio:
Mike Dahlin's research interests include Internet- and large-scale services, fault tolerance, security, operating systems, distributed systems, and file systems. He received his PhD in Computer Science from the University of California at Berkeley in 1995 under the supervision of professors Tom Anderson and Dave Patterson, and he joined the faculty at the University of Texas at Austin in 1996. He received the NSF CAREER award in 1998, received the Alfred P. Sloan Research Fellowship in 2000, and held a departmental Faculty Fellowship in Computer Science in 1999-2002 and 2004-2007. Professor Dahlin has published over 50 scholarly works, including three award papers at SOSP, two at WWW, and one at each of USENIX, SASO, and WCW.

High Performance Computing and Informatics

Date: Friday, October 22, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Ron A. Oldfield

Senior Member Technical Staff
Scalable System Software
Sandia National Laboratories

Parallel supercomputing platforms have traditionally been used to address complex scientific problems; however, recent interest in informatics problems, especially those related to cyber security and social networking, has motivated the informatics community to consider HPC systems as viable platforms for this work. In this talk, I will discuss some of the challenges associated with deploying informatics codes on capability-class supercomputers, and I will present results from two separate efforts at Sandia: porting a large-scale multilingual document clustering application to Cray XT systems, and developing a hybrid Cray/Netezza platform for fast access to data-warehouse appliances from parallel HPC codes.

Bio:
Ron A. Oldfield is a senior member of the technical staff at Sandia National Laboratories in Albuquerque, NM. He received his B.Sc. in computer science from the University of New Mexico in 1993. From 1993 to 1997, he worked in the computational sciences department of Sandia National Laboratories, where he specialized in seismic research and parallel I/O. He was the primary developer for the GONII-SSD (Gas and Oil National Information Infrastructure--Synthetic Seismic Dataset) project and a co-developer for the R&D 100 award-winning project "Salvo", a project to develop a 3D finite-difference prestack-depth migration algorithm for massively parallel architectures. From 1997 to 2003 he attended graduate school at Dartmouth College and received his Ph.D. in June 2003. In September of 2003, he returned to Sandia to work in the Scalable Computing Systems department. He currently leads the Lightweight File System project and the testing and integration effort of the SciDAC Scalable System Software project. His research interests include parallel and distributed computing, parallel I/O, and mobile computing.

Fault Resilience in Exascale Systems

Date: Friday, October 8, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Rolf Riesen

Rolf Riesen, Ph.D.
Principal Member Technical Staff
Scalable Computing Systems Sandia National Laboratories

Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost.

Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure interrupting the application, losing work, and requiring restart time, multiple failures can be absorbed by the application until its redundancy is exhausted. In this talk I will present a method to analyze the benefits of redundant computing, present simulation results of the cost, and discuss a prototype MPI implementation.
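For a sense of why checkpoint overhead explodes at scale, the sketch below applies Young's textbook approximation for the optimal checkpoint interval, t_opt = sqrt(2 * delta * MTBF). This is a standard first-order model, not the speaker's analysis, and the node MTBF and checkpoint cost are invented for illustration:

    import math

    def system_mtbf(node_mtbf_hours, nodes):
        # Failure rates add, so system MTBF shrinks roughly linearly.
        return node_mtbf_hours / nodes

    def overhead_fraction(delta_hours, mtbf_hours):
        t_opt = math.sqrt(2 * delta_hours * mtbf_hours)
        # Time lost to checkpointing plus expected rework, to first order.
        return delta_hours / t_opt + t_opt / (2 * mtbf_hours)

    for nodes in (1_000, 10_000, 100_000):
        m = system_mtbf(node_mtbf_hours=43_800, nodes=nodes)  # ~5-year nodes
        print(f"{nodes:>7} nodes: overhead ~ {overhead_fraction(0.1, m):.0%}")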

Bio:
Rolf Riesen grew up in Switzerland and learned electronics there. He became interested in software after tiring of burning his fingers on a soldering iron, and went on to earn a master's and a Ph.D. in computer science from the University of New Mexico (UNM). His advisor was Barney Maccabe, who now leads the Computer Science and Mathematics Division (CSM) at Oak Ridge National Laboratory.

In 1990 he started working with a group at Sandia while he was a research assistant at UNM and, after finishing his Master's, he was hired as a member of the technical staff in 1993. Throughout this time he designed, implemented, and debugged various pieces of system software starting with SUNMOS on the nCUBE 2 and Puma on the Intel Paragon. He created his own cluster, Cplant, before large clusters were common, and was involved in the Puma successors: Jaguar, Cougar, and Catamount for the Intel ASCI Red machine and the Cray XT3 Red Storm.

Creating Efficient, Robust, and Resilient Peer-to-Peer Systems

Date: Friday, October 1, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

David Zage

Senior Member of Technical Staff
Sandia National Laboratories

The rapid growth of communication environments such as the Internet has spurred the development of a wide range of systems and applications based on peer-to-peer ideologies. As these applications continue to evolve, there is an increasing effort to improve their overall performance. This effort has led to the incorporation of measurement-based adaptivity mechanisms and network awareness into peer-to-peer applications, which can greatly increase peer-to-peer performance and dependability. Unfortunately, these mechanisms are often vulnerable to attack, making the entire solution less suitable for real-world deployment. In this work, we study how to create robust system components for adaptivity, network awareness, and responding to identified threats. These components can form the basis for creating efficient, high-performance, and resilient peer-to-peer systems.
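As a minimal illustration of hardening a measurement-based mechanism (a toy, not the speaker's design): when a node adapts based on measurements reported by peers, a median tolerates a minority of false reports that would badly skew a mean. The RTT values below are hypothetical:

    import statistics

    reports = {  # witness -> claimed RTT (ms) to some candidate peer
        "w1": 48, "w2": 52, "w3": 50, "w4": 51,
        "attacker1": 1, "attacker2": 1,   # lies to make the peer look close
    }

    values = list(reports.values())
    print("mean  :", statistics.mean(values))    # dragged down by the lies
    print("median:", statistics.median(values))  # tracks the honest majority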

Bio:
David Zage received his B.S. and Ph.D. in Computer Science from Purdue University in 2004 and 2010, respectively. While at Purdue, he was a member of the Dependable and Secure Distributed Systems Laboratory (DS^2). His research interests include distributed systems, fault-tolerant protocols, overlay networks, routing, wireless mesh networks, and insider threats. David joined the staff of Sandia National Laboratories in the Cyber Analysis R&D Solutions group in August 2010. He is also a member of the ACM and the IEEE Computer Society.

Conflict in Networks

Date: Friday, September 24, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Jared Saia

Assoc. Professor
Department of Computer Science
University of New Mexico

This talk will survey work being done in my research group. We will cover work on the following three problems. First, designing a scalable algorithm for consensus and the Byzantine agreement problem. Second, designing networks and algorithms that are robust to web censorship. Third, a game-theoretic analysis of the secret sharing problem, in which n players each have a share of a secret and each player would prefer to learn the secret themselves without having any other player learn it.
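For background, the secret-sharing primitive at issue is illustrated below with a minimal Shamir scheme over a prime field; the game-theoretic protocol studied in the talk adds rounds of play on top of this primitive and is not shown:

    # Shamir secret sharing: any k of n shares reconstruct the secret;
    # fewer than k reveal nothing.
    import random

    P = 2_147_483_647          # a Mersenne prime; arithmetic in GF(P)

    def make_shares(secret, n, k):
        """Split `secret` into n shares with threshold k."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=42, n=5, k=3)
    assert reconstruct(shares[:3]) == 42     # any 3 of the 5 suffice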

Bio:
Jared Saia is an Associate Professor in the UNM CS department. His broad research interests are in theory and algorithms, with strong interests in distributed algorithms, game theory, security, and spectral methods. A current interest is determining how large groups can function effectively when there is no leader.

Elections, Public Policy and Science: A Natural Place for Many Stove Pipes

Date: Friday, September 17, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Dr. Lonna Rae Atkeson

Professor
Department of Political Science
University of New Mexico

The visible election problems in New Mexico, Ohio, and especially Florida in the 2000 presidential election led political scientists, as well as statisticians, computer scientists, law professors, anthropologists, political activists, and election administrators, to ask important questions about election administration. The questions stem from a normative desire to improve the election system to maintain system legitimacy and to ensure that elections are administered fairly and without fraud. Academics are pressing the election community to base their election reform policies on empirical information about the process and its impact on voters. Therefore, we wish to know how voters experience the election process, how administrators run elections, and how secure the voting process is from fraud and coercion. Over the last several elections in New Mexico, we have engaged in a variety of research activities to begin answering these fundamental questions. Our activities include election observations, voter surveys, poll worker surveys, examination of residual votes, election audit observations, and a pilot post-election audit project. My talk will focus on our pilot post-election audit work. In the winter of 2008, we counted nearly 50,000 ballots multiple times, by machine and by hand, and examined the differences both within and between counting methods. We found that machines and humans both make errors in counting, though the errors appear to be random. This work informs both the public debate on the accuracy and integrity of a particular paper-ballot voting system and the public policy process, by piloting procedures and making specific recommendations for implementing post-election audits. Given the large array of equipment, organization, management, education, and other factors involved in running an election, more interdisciplinary and cross-disciplinary research is needed to gain perspective and determine the best election reform policies.

Bio:
Dr. Lonna Rae Atkeson is a Professor and Regents' Lecturer in the Political Science Department at the University of New Mexico. She is also currently director of the Center for the Study of Voting, Elections and Democracy. Her expertise is in the area of campaigns, elections, election administration, public opinion and political behavior and has written numerous articles, book chapters, monographs and technical reports on these topics. Her research emphasizes the role contextual factors play in shaping attitudes and behaviors of political actors. She was named an "Emerging Scholar" by the Political Parties and Organizations Section of the American Political Science Association and has received repeated funding from the National Science Foundation, the Pew Charitable Trusts, and the JEHT Foundation. She holds a BA in political science from the University of California, Riverside and a Ph.D. in political science from the University of Colorado, Boulder.

PLFS: A Checkpoint Filesystem for Parallel Applications

Date: Friday, September 10, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

John Bent

Researcher
Los Alamos National Laboratory

Parallel applications running across thousands of processors must protect themselves from inevitable system failures. Many applications insulate themselves from failures by checkpointing. For many applications, checkpointing into a single shared file is most convenient. With such an approach, however, the writes are often small and not aligned with file system boundaries. Unfortunately for these applications, this preferred data layout results in pathologically poor performance from the underlying file system, which is optimized for large, aligned writes to non-shared files. To address this fundamental mismatch, we have developed a virtual parallel log-structured file system, PLFS. PLFS remaps an application's preferred data layout into one that is optimized for the underlying file system. Through testing on PanFS, Lustre, and GPFS, we have seen that this layer of indirection and reorganization can reduce checkpoint time by an order of magnitude for several important benchmarks and real applications, without any application modification. The full paper, which was a best paper nominee at SC '09, can be downloaded at: http://institutes.lanl.gov/plfs.
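The sketch below is a toy rendering of the remapping idea, with an invented in-memory format rather than PLFS's actual on-disk layout: each process appends to its own log, and an index maps logical offsets back so reads can reassemble the shared logical file:

    class ToyPLFS:
        def __init__(self):
            self.logs = {}    # rank -> bytearray (per-process log)
            self.index = []   # (logical_offset, length, rank, log_offset)

        def write(self, rank, logical_offset, data):
            log = self.logs.setdefault(rank, bytearray())
            self.index.append((logical_offset, len(data), rank, len(log)))
            log.extend(data)  # every write becomes a sequential append

        def read(self, logical_offset, length):
            out = bytearray(length)
            for lo, ln, rank, pos in self.index:   # later writes win
                for i in range(ln):
                    j = lo + i - logical_offset
                    if 0 <= j < length:
                        out[j] = self.logs[rank][pos + i]
            return bytes(out)

    fs = ToyPLFS()
    fs.write(rank=0, logical_offset=0, data=b"AAAA")  # small, strided,
    fs.write(rank=1, logical_offset=4, data=b"BBBB")  # shared-file writes
    print(fs.read(0, 8))                              # b'AAAABBBB'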

Bio:
John's first real job was moving chickens on an assembly line to make extra money for college. After graduating with a bachelor's in anthropology from Amherst College, John spent two years as a public school librarian in Palau as a Peace Corps Volunteer. He then worked for a personal injury lawyer for a year while applying to CS graduate school. Wisconsin accepted him, and seven years later John finished his dissertation about data-aware batch schedulers. Since then, John has spent the last five years working on parallel storage systems at Los Alamos National Laboratory. PLFS grew out of that work.

Scheduling Movements of Multiple Mobile Sinks to Maximize Wireless-Sensor-Network Lifetime

Date: Friday, September 3, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Cynthia Phillips

Senior Scientist
Sandia National Laboratories

Abstract:
Unattended sensor networks typically watch for some phenomena, such as volcanic events or forest fires. When the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with a high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then effectively dead.

We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized offline heuristic that finds sink movement schedules that produce network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed online heuristic that produces lifetimes at most 25.3% below the upper bound for steady traffic.
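The LP upper bound has a compact form: choose how long to operate in each sink configuration, subject to a battery budget at every node. A toy instance with invented energy numbers (the real formulation in the talk also handles sink movement feasibility, hence the integer programming):

    import numpy as np
    from scipy.optimize import linprog

    # drain[i][k]: energy node i spends per hour in sink configuration k
    drain = np.array([[3.0, 1.0, 1.0],
                      [1.0, 3.0, 1.0],
                      [1.0, 1.0, 3.0]])
    battery = np.array([100.0, 100.0, 100.0])

    K = drain.shape[1]
    res = linprog(c=-np.ones(K),             # maximize total time sum(t_k)
                  A_ub=drain, b_ub=battery,  # per-node battery constraints
                  bounds=[(0, None)] * K)
    print("lifetime upper bound:", -res.fun, "hours; schedule:", res.x)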

This research is typical of the interdisciplinary research at Sandia National Laboratories. It draws upon techniques from operations research (linear and integer programming), combinatorial optimization (traveling salesman, graph matching), and homegrown software tools to provide a practical solution for a realistically sized network management problem.

This is joint work with Stefano Basagni (Northeastern University), Alessio Carosi (Università di Roma "La Sapienza"), and Chiara Petrioli (Università di Roma "La Sapienza").

Bio:
Cynthia Phillips is a senior scientist in the Discrete Mathematics & Complex Systems Department at Sandia National Laboratories. She received a PhD in computer science from MIT in 1990. In her 20 years at Sandia National Laboratories she has conducted research in combinatorial optimization, algorithm design and analysis, and parallel computation with applications to scheduling, network and infrastructure surety, integer programming, graph algorithms and analysis, vehicle routing, computational biology, computer security, quantum computing, wireless networks, and experimental algorithmics.

Boltzmann Solution Concepts, epsilon Logic, and the Emergence of Timescales in an Animal Society

Date: Friday, August 27, 2010
Time: 12noon — 12:50 pm
Place: Centennial Engineering Center, Room 1041

Simon DeDeo

Omidyar Postdoctoral Fellow
Santa Fe Institute

Abstract:
Quantitative data on the behavior of animals in larger (N~50) groups allow for the detection and study of new phenomena that arise from the rational and perceptual capabilities of individuals acting in subgroup contexts. Here we report on three new approaches to a particular set of observations, of pigtailed macaques at the Yerkes Primate Research Center, that illuminate the complexity of group behavior in terms of game theory (Boltzmann Solution Concepts), noisy computational processes (epsilon-Logic), and the interaction of different environmental, social, physiological, and cognitive mechanisms in the time domain (Lomb-Scargle periodogram analysis of timescales).
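Of the three approaches, the timescale analysis is the easiest to sketch: the Lomb-Scargle periodogram estimates periodicities from unevenly sampled observations, as behavioral records usually are. The example below uses synthetic data, not the macaque observations:

    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 300))       # irregular observation times
    y = np.sin(2 * np.pi * t / 20) + 0.3 * rng.standard_normal(t.size)

    periods = np.linspace(2, 50, 500)
    power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)
    print("dominant period ~", periods[np.argmax(power)], "time units")  # ~20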

Bio:
Simon DeDeo is an Omidyar Postdoctoral Fellow at the Santa Fe Institute; he uses tools from statistical physics and complex systems to study problems in animal and human behavior.