UNM Computer Science

Colloquia Archive - Fall 2006



System Challenges in Scale-out Platforms

Date: Friday, December 8th, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Dilma Da Silva
IBM T.J. Watson Research Center

Abstract:
This presentation will describe current efforts at IBM Research to advance scale-out platforms for traditional and emerging commercial workloads. It will discuss our experience so far in exploring specialized operating system environments for achieving management simplification and performance optimization.

Bio:
Dr. Dilma Da Silva is a Research Staff Member at IBM T.J. Watson Research Center, Yorktown Heights, NY. She is part of the K42 team. Her primary research interests are operating systems, distributed computing, and high-end computing. She received her PhD in Computer Science in 1997 from Georgia Institute of Technology. She was an Assistant Professor in the Department of Computer Science at the University of Sao Paulo, Brazil, from 1997 to 2000.

Computer Science Town Hall

Date: Friday, December 1st, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Stephanie Forrest, Chair,
UNM Department of Computer Science

Stephanie is devoting this colloquium meeting to an open forum with the UNM CS students. As department chair, she would like to discuss the current "state of the department" and, more importantly, to hear your concerns, questions, and aspirations. Please come prepared to discuss your thoughts on where the department is, where it's going, and how it could be improved. All graduate students are encouraged to attend, even if you are not currently registered for colloquium.

Nanotechnology, Reversible Computing, and the Fate of the Universe

Date: Friday, November 17th, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Sarah Murphy
University of Notre Dame

Abstract: Scientists require more computing power to address several important problems, including climate modeling, earthquake mitigation, and a variety of physics simulations. Leading scientists estimate that including all the important science, using the desired level of granularity, and running the needed ensembles of simulations will require zettaflops of computing power. Even if all the power of the Hoover Dam were harnessed, end-of-the-roadmap CMOS and current design methodologies would not be able to provide this level of computing power.

This talk will explain why novel technologies and design methodologies are needed; discuss a particular device, quantum-dot cellular automata (QCA); and dive into the controversies and possibilities of reversible computing.

Bio: Sarah Murphy is a Ph.D. Candidate in the Department of Computer Science and Engineering at the University of Notre Dame, working under Peter Kogge on matching emerging nano-scale devices with appropriate computing paradigms in order to better solve hard problems.

Mechanism Design: Truthful and Frugal Auctions

Date: Friday, November 10th, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

David Kempe
University of Southern California

Abstract
Due to the increasing interaction of users in networks, and the ability to conduct economic transactions and auctions easily and frequently, there has been a recent surge of interest in the boundary between computation and game theory. Among the important developments has been a focus on computational aspects of auctions, and on the design of auctions with desirable properties, such as providing disincentives to lying or cheating, and maximizing the seller's revenue (or minimizing the buyer's costs). In this talk, we specifically study mechanisms for auctions in which the auctioneer is trying to hire a team of agents to perform a complex task and pay them for their work. As is common in the field of mechanism design, we assume that the agents are selfish and will act in such a way as to maximize their profit, which in particular may include misrepresenting their true incurred cost. Our first contribution is a new and natural definition of the frugality ratio of a mechanism, measuring the amount by which a mechanism "overpays", and extending previous definitions to all set systems without monopolies.

After reexamining several known results in light of this new definition, we proceed to study in detail shortest path auctions and "r-out-of-k sets" auctions. We show that when individual set systems (e.g., graphs) are considered instead of worst cases over all instances, these problems exhibit a rich structure, and the performance of different mechanisms previously considered comparable may be vastly different. In particular, we show that the well-known VCG mechanism may be far from optimal in these settings, and we propose and analyze a mechanism that is always within a constant factor of optimal.
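As background for readers less familiar with VCG in this setting (a standard textbook sketch, not the speakers' own notation): in a shortest-path auction, each edge on the winning path is paid its reported cost plus the amount by which its presence improves the best available path, and frugality measures ask how much this total payment exceeds a reasonable benchmark.

    % VCG payment rule for a shortest-path auction (background sketch only).
    % d(G) is the cost of the shortest s-t path in G, d(G \setminus e) the cost
    % of the shortest s-t path when edge e is removed, and c_e is e's bid.
    \[
      p_e \;=\; c_e \;+\; \bigl( d(G \setminus e) - d(G) \bigr),
      \qquad e \in P .
    \]
    % The total payment \sum_{e \in P} p_e can be much larger than d(G);
    % a frugality ratio quantifies this kind of overpayment relative to a
    % benchmark such as the cost of the best competing solution.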

This talk represents joint work with Anna Karlin and Tami Tamir.

Bio:
David Kempe received his Ph.D. from Cornell University in 2003, and has been on the faculty in the computer science department at USC since the Fall of 2004. His primary research interests are in computer science theory and the design and analysis of algorithms, with a particular emphasis on social networks, distributed network algorithms, and game theoretic and pricing questions. He is a recipient of the NSF CAREER award.

Research Directions in Direct Cache Access (DCA) for I/O

Date: Friday, November 3rd, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Ram Huggahalli & Amit Kumar
Corporate Technology Group, Intel

Abstract
The heart of this presentation will be our ISCA 2005 paper on ‘Direct Cache Access (DCA) for High Bandwidth Network I/O’. The context of our research work is recent I/O technologies such as PCI-Express and 10Gb Ethernet that enable unprecedented levels of I/O bandwidth in mainstream platforms. We observe that in traditional platforms, memory latency alone can prevent current architectures from keeping pace with 10 Gb/s of inbound network I/O traffic. We propose a platform-wide method called Direct Cache Access (DCA) to deliver inbound I/O data directly into processor caches. We demonstrate that DCA provides a significant reduction in memory latency and memory bandwidth for receive-intensive network I/O applications.

We will also present subsequent findings from a real-silicon prototype of Direct Cache Access for a dual-ported GbE NIC, and review architectural directions for future usage models, related coherence protocols, and their role in platform I/O performance.

Ram Huggahalli
Ram Huggahalli is a Network Systems Architect in Communication Technology Lab (CTL), an advanced development and research lab in Intel’s Corporate Technology Group. He has 15 years of experience in x86 processors, client/server systems architecture and in new technology evaluation in the areas of memory, graphics and networking. Ram’s recent research efforts involved adapting Intel platforms to network protocol processing at multi-10GbE rates. He has an MS degree in Electrical Engineering and also in Engineering Management from the University of Missouri.

Amit Kumar
Amit Kumar is a Research Software Engineer in Communication Technology Lab (CTL), an advanced development and research lab in Intel’s Corporate Technology Group. Amit has more than 4 years of experience in system software, specializing in embedded and real-time systems programming and kernel programming. His current research explores network stack architecture and its relationship to memory hierarchies in general-purpose computer systems. He has an MS degree in Computer Science.

Simple universal devices and computational complexity

Date: Friday, October 27, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Damien Woods
Boole Centre for Research in Informatics, School of Mathematics
University College Cork, Ireland

Abstract
It has been an open question for forty years whether the smallest known universal Turing machines of Minsky, Rogozhin and others are efficient (polynomial-time) simulators of Turing machines. These are some of the most intuitively simple computational devices, and previously the best known simulations were exponentially slow. We show that these machines are indeed efficient simulators. As a related result, we also show that Rule 110, a famous elementary cellular automaton, is efficiently universal.
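As a small illustration of just how simple these devices are (not taken from the talk itself), Rule 110 updates each cell of a one-dimensional binary array from its own state and its two neighbors' states using a fixed eight-entry lookup table; a minimal Python sketch, assuming periodic boundaries:

    # Minimal Rule 110 simulator (illustrative sketch only, not the authors' code).
    # The new value of each cell is looked up from its 3-cell neighborhood.
    RULE = 110
    TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    def step(cells):
        """One synchronous update of a row of cells with wraparound boundaries."""
        n = len(cells)
        return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    row = [0] * 31 + [1] + [0] * 32   # start from a single live cell
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)

Despite this tiny description, the hard part is showing that such a device can simulate arbitrary Turing machines with only polynomial slowdown, which is the subject of the talk.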

This is joint work with Turlough Neary from NUI Maynooth, Ireland.

Bio
Damien Woods works on algorithms and computational complexity theory for optical computers, biomolecular computers, cellular automata, small universal Turing machines, and other computational devices.

Massive Multithreading for Unstructured Graph Problems

Date: Friday, October 20, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Jon Berry
Sandia National Lab

Abstract
Traditional distributed memory clusters are effective platforms for problems such as meshing, in which physical locality implies that an appropriate partitioning of the work is possible. However, in applications such as social network analysis, there may be no effective partitioning. Clusters also struggle with computations that get broken down into subproblems via divide-and-conquer or branch-and-bound algorithms, such that the subproblems themselves can be too large to store in local memory. The architectural paradigm of massive multithreading is an appealing alternative for large discrete problems without locality.

Sandia National Laboratories has been exploring massive multithreading as a solution technology for such problems. We will present an overview of this work in three parts: an introduction to the Cray MTA-2, the most mature existing instance of this paradigm; an introduction to the programming model for the MTA-2; and an introduction to algorithm design for massively multithreaded computers.

More specifically, we will present our work on the "MultiThreaded Graph Library (MTGL)" for graph query, and two multithreaded graph algorithms: a simple algorithm for connected components and a simple heuristic for subgraph isomorphism. These algorithms scale almost perfectly on the MTA-2 for randomized power-law graph instances. We will conclude with a discussion of future work in multithreaded algorithms and of the next-generation massively multithreaded computers under development.
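To make the connected-components example concrete, here is a label-propagation style sketch in the spirit of such algorithms (a serial Python illustration only, not the MTGL code, which is written in C++ and tuned for the MTA-2):

    # Illustrative label-propagation connected components (serial Python sketch).
    # Every vertex repeatedly adopts the smallest label seen across its edges;
    # sweeps continue until no label changes.

    def connected_components(num_vertices, edges):
        label = list(range(num_vertices))   # each vertex starts in its own component
        changed = True
        while changed:
            changed = False
            for u, v in edges:
                low = min(label[u], label[v])
                if label[u] != low:
                    label[u] = low
                    changed = True
                if label[v] != low:
                    label[v] = low
                    changed = True
        return label

    edges = [(0, 1), (1, 2), (3, 4)]
    print(connected_components(5, edges))   # e.g. [0, 0, 0, 3, 3]

Each sweep over the edge list is a flat loop with no partitioning step, which is exactly the kind of fine-grained, irregular parallelism a massively multithreaded machine can exploit when the graph has no useful partition.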

Bio
Jon Berry received his PhD in computer science from Rensselaer Polytechnic Institute, did a postdoc at DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science based at Rutgers University, then spent 8 years teaching computer science at Elon University in North Carolina and Lafayette College in Easton, PA. In 2004 he moved to Sandia National Laboratories, where he is now a member of the technical staff. This talk concerns one of his current research interests: exploring the intersection between graph algorithms and high-performance computing architectures.

Autonomous Onboard Data Analysis and Prioritization -- Canceled

Date: Canceled

Kiri Wagstaff
NASA/JPL


A Continuous Topology for Image Representation

Date: Friday, September 29, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Joe Kniss
Department of Computer Science, UNM

This talk will cover recent developments in image representation from a topological point of view. Computer display and digital image capture resolutions are growing rapidly. This work aims to address the emerging need for resolution independent display of image data. Vector image representations have been traditionally used for character/font display and illustrations. While these representations permit resolution independent display, they have a number of drawbacks that make them undesirable for use with general images. The proposed method integrates vector and raster image representations to leverage the advantages of both, and applies the mathematical machinery of algebraic topology to provide a rigorous theoretical foundation.

Attack-Resistant Algorithms for Massive Networks

Date: Friday, September 22, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Jared Saia
Department of Computer Science, UNM

Abstract: In this talk, we will describe new attack-resistant algorithms for peer-to-peer networks. Our attack model is rather strong in that we assume that an omniscient and computationally unbounded adversary takes over up to a constant fraction of the peers in the network. Our algorithms are scalable in the sense that for every peer, all major resource costs (e.g. latency, number of bits sent and received, number of links to other peers) are polylogarithmic in the number of peers in the network. We present attack-resistant algorithms for the problems of leader election, Byzantine agreement, and voting, and describe a general framework that can be used to design such algorithms for other types of problems. We also describe many areas for future work.

These results are joint work with Valerie King (U. Victoria), Vishal Sanwalani (U. Waterloo), and Erik Vee (IBM Labs), and describe work previously published in the Symposium on Discrete Algorithms (SODA) and work that will appear in Foundations of Computer Science (FOCS).

Bio
Jared Saia obtained his PhD in Computer Science at the University of Washington in 2002 under Anna Karlin and is now an Assistant Professor at the University of New Mexico. His broad research interests are in theory and algorithms with a strong focus on designing provably good distributed algorithms for large-scale networks. He has published in a wide variety of the top conferences and journals in this area including: Foundations of Computer Science (FOCS), Principles of Distributed Computing (PODC), and Symposium on Discrete Algorithms (SODA).

Divisionality: A measure of fine-granularity community structure

Date: Friday, September 15, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Todd Kaplan
Department of Computer Science, UNM

Abstract
Within the area of complex networks, there has been recent interest in the detection of community structure. In this context, a community refers to a group of densely connected vertices. Community structure algorithms aim to identify communities of nodes within a network. As a means for quantifying the success of a given community partitioning, a statistical measurement known as modularity was introduced. In this talk, I introduce an alternative measurement called divisionality. When optimized on a given graph, this new measurement identifies finer-grained structure than modularity does. I will compare the two measurements on networks of known structure and then explain how community structure plays a role in the larger scope of my research involving financial markets.
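For reference (this is standard background, not the definition of divisionality given in the talk), the Newman-Girvan modularity of a partition is:

    % Newman-Girvan modularity (background only; divisionality is defined in
    % the talk itself). A_{ij} is the adjacency matrix, k_i the degree of
    % vertex i, m the total number of edges, and c_i the community of vertex i.
    \[
      Q \;=\; \frac{1}{2m} \sum_{i,j}
          \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j)
    \]
    % Q rewards partitions whose within-community edge fraction exceeds what a
    % degree-preserving random graph would be expected to give.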

Bio
Todd Kaplan is a Ph.D. student in Computer Science at the University of New Mexico. He received his undergraduate degree in Biology at Brown University and holds graduate degrees in Computer Science (Cambridge University, UK) and Applied Mathematics (Oxford University, UK). He is a member of the Adaptive Computation group at UNM, led by Professor Stephanie Forrest.

Dual Photography

Date: Friday, September 8, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Pradeep Sen
Department of Electrical and Computer Engineering, UNM

Abstract
The pursuit of interactive, photorealistic graphics will present many interesting challenges in the upcoming years for both geometry-based and image-based rendering. In this talk, I will begin by presenting some of the results from my dissertation work on silhouette maps and show how they improve the appearance of textures rendered by real-time, geometry-based systems. However, the focus of this talk will be in the area of image-based rendering, where I address the specific problem of scene relighting. I present a novel approach to this problem called "dual photography," which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location. The technique is completely image-based, requiring no knowledge of scene geometry or surface properties, and by its nature automatically includes all transport paths, including shadows, inter-reflections and caustics.

In its simplest form, the technique can be used to take photographs without a camera. I will show results of images we captured using only a projector and a photo-resistor. If the photo-resistor is replaced by a camera, we can produce a 4D dataset that allows for relighting with 2D incident illumination. Using an array of cameras we can produce a 6D slice of the 8D reflectance field that allows for relighting with arbitrary light fields. Since an array of cameras can operate in parallel without interference, whereas an array of light sources cannot, dual photography is fundamentally a more efficient way to capture such a 6D dataset than a system based on multiple projectors and one camera. Therefore, while current techniques must acquire the data necessary for relighting in linear time with respect to the number of samples, dual photography enables us to capture the same data in constant time. As a demonstration, I will show various scenes that we have captured and relit using our technique.
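A compact way to state the underlying idea (standard light-transport notation, not necessarily the speaker's exact formulation): writing the scene's light transport as a matrix, Helmholtz reciprocity says the dual image is governed by the transpose of that matrix.

    % Light-transport view of dual photography (illustrative notation).
    % p: projector pattern (one entry per projector pixel)
    % c: camera image (one entry per camera pixel)
    % T: the scene's light-transport matrix, measured via structured illumination
    \[
      c \;=\; T\, p
    \]
    % By Helmholtz reciprocity, swapping the roles of light source and sensor
    % gives the dual configuration, governed by the transpose:
    \[
      c' \;=\; T^{\mathsf{T}} p'
    \]
    % so once T is known, an image from the projector's viewpoint, under any
    % virtual illumination p' supplied at the camera, can be computed directly.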

Bio
Pradeep Sen is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of New Mexico. He received his B.S. degree in Computer and Electrical Engineering from Purdue University in 1996 and his M.S. in Electrical Engineering from Stanford University in 1998 in the area of electron-beam lithography. After two years of working at a profitable startup company that he co-founded, he joined the Stanford Graphics Lab in the Fall of 2000, where he received his Ph.D. in Electrical Engineering in June 2006. His work in graphics includes discontinuous texture representations, real-time shading, and dual photography, and has been featured in Slashdot, New Scientist, CGWorld, CPU, and Technology Research News Magazine. His interests include real-time graphics and graphics hardware, global illumination algorithms, computational photography, and display technology.

Stable Internet Routing Without Global Coordination

Date: Friday, September 1st, 2006
Time: 1 pm — 2:15 pm
Place: ME 218

Jennifer Rexford
Department of Computer Science
Princeton University

The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability. This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet's hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies.

Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.
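To give a flavor of what such local guidelines look like (a hypothetical Python sketch of the customer/peer/provider preference and export rules described above, not the paper's exact conditions or proofs):

    # Hypothetical sketch of relationship-based routing guidelines (illustration
    # only; the precise conditions and the convergence proof are in the paper).
    # Relationships are labeled from the local AS's point of view.
    CUSTOMER, PEER, PROVIDER = "customer", "peer", "provider"

    # Prefer routes learned from customers over routes learned from peers or
    # providers; the tie-break on AS-path length is just for illustration.
    RANK = {CUSTOMER: 0, PEER: 1, PROVIDER: 1}

    def best_route(candidates):
        """candidates: list of (relationship of advertising neighbor, AS path)."""
        return min(candidates, key=lambda r: (RANK[r[0]], len(r[1])))

    def should_export(learned_from, export_to):
        """Export customer-learned routes to everyone; export routes learned
        from peers or providers only to customers (no free transit)."""
        return learned_from == CUSTOMER or export_to == CUSTOMER

    routes = [(PROVIDER, ["AS2", "AS3"]), (CUSTOMER, ["AS4"])]
    print(best_route(routes))             # the customer-learned route wins
    print(should_export(PEER, PROVIDER))  # False: do not carry peer traffic to a provider

Because every AS can apply checks of this kind using only its own relationship labels, no global coordination or disclosure of BGP configurations is required.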