UNM Computer Science

Colloquia Archive - Fall 2004



Distance Computation on Gene Order Data

Date: December 9, 2004
Time: 11:00 - 12:15 pm
Location: Woodward 149

Krister Swenson <kswenson@cs.unm.edu>
Department of Computer Science

The genome of an organism can be represented as a sequence of signed numbers. A natural question to investigate is how closely related two genomes are under certain biologically plausible operations on their sequences. The canonical operation that has been considered is the inversion (or reversal), and the relative similarity of two genomes has been measured by finding the minimum number of inversions that transform one sequence into the other.
Ten years ago Hannenhalli and Pevzner developed a rich theory and found a polynomial-time algorithm to compute this distance for any two sequences, each with exactly one copy of a given number. Since then the theory has been refined and extended (to handle insertions and deletions), but no method has existed to deal with two arbitrary sequences.

In this presentation we investigate what operations are needed in order to find a plausible distance between arbitrary sequences, and show the techniques we are developing to handle them.
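As a toy illustration of the basic objects (not the Hannenhalli-Pevzner machinery), the sketch below applies signed inversions to a sequence and computes the inversion distance by brute-force breadth-first search, which is feasible only for very short sequences:

```python
from collections import deque

def reverse_segment(seq, i, j):
    """Apply a signed inversion: reverse seq[i..j] and flip each sign."""
    return seq[:i] + tuple(-x for x in reversed(seq[i:j + 1])) + seq[j + 1:]

def inversion_distance(start, goal):
    """Minimum number of inversions turning `start` into `goal`,
    found by brute-force BFS; only feasible for tiny sequences."""
    start, goal = tuple(start), tuple(goal)
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        seq, d = frontier.popleft()
        if seq == goal:
            return d
        n = len(seq)
        for i in range(n):
            for j in range(i, n):
                nxt = reverse_segment(seq, i, j)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
    return None
```

For example, (1, -2, 3) is one inversion away from (1, 2, 3): reversing the single middle element flips its sign.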

Self-stabilizing Algorithms on Tree Networks

Date: Thursday December 2, 2004
Time: 11am-12:15pm
Location: Woodward 149

Prof. Fredrik Manne <nfmann@sandia.gov>
Department of Informatics
University of Bergen,
Bergen, Norway
and Sandia National Labs.
Albuquerque, NM

A distributed system can be modeled as an undirected graph $G=(V,E)$, where $V$ is the set of $n$ systems, or nodes, and $E$ is the set of links, or edges, interconnecting the systems. In the self-stabilizing algorithmic paradigm, the nodes are the computational units and each node can only see its neighbors and itself, yet the system of simultaneously running algorithms must converge to a global state satisfying some desired property. This is similar to what one might experience in, for instance, an ad hoc network.

Problems that are typically straightforward to solve with a sequential algorithm often require far more clever approaches in the self-stabilizing paradigm. The advantage of self-stabilizing algorithms is that even if the underlying structure of the graph changes, through a fault or because the graph is dynamic in nature, the algorithm will converge to a new legal solution once the underlying graph structure stabilizes.

In this presentation we will give an introduction to self-stabilizing algorithms. We will also look at some new results on developing more efficient self-stabilizing algorithms for tree networks.
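A minimal simulation may make the paradigm concrete. The sketch below (hypothetical function names, not an algorithm from the talk) runs a self-stabilizing level computation on a rooted tree: each node's rule reads only its own state and its parent's, yet from an arbitrary corrupt initial state the system converges to the correct levels:

```python
import random

def self_stabilize_levels(parent, levels, rng=random.Random(0)):
    """Self-stabilizing level computation on a rooted tree.
    parent[v] is v's parent; the root satisfies parent[root] == root.
    Starting from an ARBITRARY state `levels`, a scheduling "daemon"
    repeatedly picks some enabled node, which applies its purely local
    rule, until no node wants to move."""
    levels = dict(levels)

    def target(v):
        # Local rule: the root wants level 0, everyone else parent + 1.
        return 0 if parent[v] == v else levels[parent[v]] + 1

    while True:
        enabled = [v for v in parent if levels[v] != target(v)]
        if not enabled:
            return levels          # global legal state reached
        v = rng.choice(enabled)    # the daemon picks any enabled node
        levels[v] = target(v)
```

Even though every node acts on stale, purely local information, correctness spreads from the root downward regardless of how badly the initial state was corrupted.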

Automatic Generation of Loop Invariants

Date: Tuesday November 23, 2004
Time: 11am-12:15pm
Location: Woodward 149

Deepak Kapur <kapur@cs.unm.edu>

I will start with a bird's-eye view of my research activities. I will then turn to a research problem I have been working on during the last year: automatic generation of loop invariants. Three different but related approaches, developed in collaboration with Enric Rodriguez-Carbonell, will be presented. The critical role of algebraic geometry in discovering these results will be discussed. It is proposed that equally rich theories about other data structures will have to be developed in order to make progress in automatically generating program invariants expressed in terms of properties of such data structures.
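As a small, hypothetical illustration of the kind of invariant at stake (not an example from the talk), the loop below maintains the polynomial equation p + a*b == A*B throughout execution; conjunctions of such polynomial equations are exactly the algebraic invariants that generation techniques based on algebraic geometry aim to discover automatically:

```python
def multiply(A, B):
    """Russian-peasant multiplication of non-negative integers.
    The loop maintains the polynomial invariant  p + a*b == A*B,
    which immediately proves the result correct: at exit b == 0,
    so p == A*B."""
    a, b, p = A, B, 0
    while b > 0:
        assert p + a * b == A * B   # the invariant, checked at runtime
        if b % 2 == 1:
            p += a
        a *= 2
        b //= 2
    return p
```

Note that the invariant is a nonlinear polynomial relation among the program variables, which is why purely linear invariant generators miss it.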

Exploring Grand Challenges in Trustworthy Computing

Date: Monday November 15, 2004
Time: 2:30-4:00pm
Location: Woodward 149

Eugene H. Spafford <spaf@purdue.edu>

Abstract:
We are presented with numerous challenges to make our information systems more secure, increase our confidence in our stored data, and protect the privacy of our personal information. However, under the steady barrage of attacks and flaws, it is sometimes difficult to think in terms of "big" challenges that can inspire us to make revolutionary, rather than evolutionary, strides.

In this presentation I will discuss a few of the trends and problems that have been occupying researchers and industry over the last few years. I will explain why advances against these challenges are unlikely to provide long-term improvements in the security of our infrastructure. From this, I will then discuss the results of the recent CRA Grand Challenges conference on information security, including some discussion of how we might proceed to make progress on each of these four grand challenges.

Bio:
Eugene H. Spafford is a professor of Computer Sciences at Purdue University, a professor of Philosophy (courtesy appointment), a professor of Communication (courtesy), a professor of Electrical and Computer Engineering (courtesy), and is Executive Director of the Center for Education and Research in Information Assurance and Security. CERIAS is a campus-wide multi-disciplinary Center, with a broadly-focused mission to explore issues related to protecting information and information resources. Spaf has written extensively about information security, cybercrime, software engineering, and professional ethics. He has published over 100 articles and reports on his research, has written or contributed to over a dozen books, and he serves on the editorial boards of most major infosec-related journals.

In his career to date, Professor Spafford and his students are credited with a number of security "firsts," including the first open security scanner, the first widely-available intrusion detection tool, the first integrity-based control tool, the first multistage firewall, the first formal bounds on intrusion detection, the first reference model of firewalls, and some of the first work in vulnerability classification databases. Much of the current security product industry can therefore be viewed as based, in part, on his past research. His current research is directed towards issues of public policy and information security, architecture and construction of highly-secure systems, and cyberforensic technologies.

Dr. Spafford is a Fellow of the ACM, Fellow of the AAAS, Fellow of the IEEE, and is a charter recipient of the Computer Society's Golden Core award. In 2000, he was named as a CISSP, honoris causa. He was the year 2000 recipient of the NIST/NCSC National Computer Systems Security Award, generally regarded as the field's most significant honor in information security research. In 2001, he was named as one of the recipients of the "Charles B. Murphy" awards and named as a Fellow of the Purdue Teaching Academy, and in 2003 was named to the "Book of Great Teachers" -- thus receiving all three of the University's highest awards for outstanding teaching. In 2001, he was elected to the ISSA Hall of Fame, and he was awarded the William Hugh Murray medal of the NCISSE for his contributions to research and education in infosec. He is a 2003 recipient of the Air Force medal for Meritorious Civilian Service. In 2004, Spaf was named as the recipient of the IEEE Computer Society's Taylor Booth medal, and of the ACM SIGCAS's "Making a Difference" award.

Among his many activities, Spaf is co-chair of the ACM's U.S. Public Policy Committee, is a member of the Board of Directors of the Computing Research Association, and is a member of the President's Information Technology Advisory Committee (PITAC). He is a member of the FBI's Regional Computer Forensic Laboratory program, and of several corporate boards of advisors.

Techniques and Tools for Developing Correct Large Programs and Running Them Fast

Date: Thursday November 11, 2004
Time: 11am-12:15pm
Location: Woodward 149

Manuel Hermenegildo <herme@cs.unm.edu>
Departments of Computer Science & Electrical and Computer Engineering
University of New Mexico

Computer programs are some of the most complicated artifacts built by mankind -- many arguably far more complicated than the computer itself. Because of this complexity, it takes large amounts of manpower to develop and maintain software and, especially, to avoid errors. The objective of our research is to improve this situation by developing techniques and tools that help programmers write large, complex programs in less time, while ensuring that the programs written are correct and result in efficient executions. I will present some details of how we attack this problem, by developing higher-level, multiparadigm programming languages, as well as more powerful formal techniques and practical tools for program debugging and verification. We also develop resource-aware, optimizing compilers that can produce efficient sequential code from programs written in very high-level languages, as well as parallelize such programs automatically for running on multiprocessors and/or in distributed environments.

Machine Learning at UNM and Abroad

Date: Tuesday November 9, 2004
Time: 11 am-12:15 pm
Location: Woodward 149

Terran Lane <terran@cs.unm.edu>

You may not realize it, but machine learning techniques are popping up all around you. Since the 1970s, spacecraft, satellites, and aircraft have used Kalman filters to determine where they are. TiVo, Amazon, and Netflix all use "recommender systems" that employ more-or-less sophisticated forms of ML to help figure out what you want. Google uses ML clustering techniques to rank web pages and divide them into related topic groups.
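As a toy instance of the Kalman filtering mentioned above (a hypothetical one-dimensional sketch, not the filter used in any of those systems), here is a minimal filter tracking a nearly constant value from noisy measurements:

```python
def kalman_1d(measurements, q=1e-4, r=0.5 ** 2, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter for a (nearly) constant hidden value.
    q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                # predict: uncertainty grows between steps
        k = p / (p + r)          # Kalman gain: how much to trust z
        x = x + k * (z - x)      # update estimate toward the measurement
        p = (1 - k) * p          # update (shrink) the uncertainty
        estimates.append(x)
    return estimates
```

Fed a stream of measurements of a constant value, the estimate converges toward that value while the gain automatically decays as confidence accumulates.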

In this talk, I'll give a brief overview of some machine learning background and applications, and then I'll turn it over to some of my current students to discuss their research. They'll each spend short segments describing their work in analysis of neuroimaging data, kernel computations, and bioinformatics.

Green Destiny: A 240-Node Energy-Efficient Supercomputer in Five Square Feet

Date: Thursday October 28, 2004
Time: 11am-12:15pm
Location: Woodward 149

Wu-chun Feng <feng@lanl.gov>
Los Alamos National Lab.

Abstract:
Since 1991, the performance per watt of a supercomputer has only improved 300-fold and the performance per square foot only 65-fold. Clearly, we are building less and less efficient supercomputers, resulting in the construction of massive machine rooms and even entirely new buildings. Furthermore, as these supercomputers continue to follow "Moore's Law for Power Consumption," the reliability of these systems continues to plummet, as per Arrhenius's equation when applied to microelectronics. To address these problems, we constructed Green Destiny, a 240-processor supercomputer that fits in a telephone booth and sips only 3 kilowatts of power. Green Destiny provides reliable supercomputing cycles while sitting in an 80-90°F dusty warehouse at 7,400 feet above sea level, and it does so without any special facilities, i.e., no air conditioning, no humidification control, no air filtration, and no ventilation.

Bio:
Dr. Wu-chun Feng --- or more simply, Wu --- is a technical staff member and team leader of Research & Development in Advanced Network Technology (RADIANT) in the Computer & Computational Sciences Division at Los Alamos National Laboratory and a fellow of the Los Alamos Computer Science Institute. His research interests span the areas of high- performance networking and computing. He received a B.S. in Electrical & Computer Engineering and Music (Honors) and an M.S. in Computer Engineering from the Pennsylvania State University in 1988 and 1990, respectively. He earned a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1996. He is a Senior Member of the IEEE.

How will Hamlet find the Holodeck?

Date: Tuesday October 26, 2004
Time: 11am-12:15pm
Location: Woodward 149

Ken Perlin <perlin@mrl.nyu.edu>
Computer Science Department
Media Research Laboratory
Center for Advanced Technology
New York University

Abstract:
Character-driven narrative, which drives such media as theatre, novels, and cinema, is one of the primary ways that a society collectively explores that great human obsession: the human condition.

Computer-mediated interactive media can bring powerful new voices to this tradition, but only when two specific kinds of tools are sufficiently developed: (i) generation of contingent narratives that can shift interactively based on character, personality, psychological state, and kinship groups, and (ii) effective virtual actors that can convey subtle, dynamic and conflicting emotional states.

I will lay out a roadmap for achieving those goals, and I will show work that we've already done toward achieving powerful and believable virtual actors. I will conclude with some thoughts about what a psychologically mature interactive narrative medium might be like, and why it will utterly change our culture.

Bio:
Ken Perlin is a Professor in the Department of Computer Science at New York University. He is the Director of the Media Research Laboratory and the co-Director of the NYU Center for Advanced Technology. His research interests include graphics, animation, and multimedia. In January 2004 he was the featured artist at the Whitney Museum of American Art. In 2002 he received the NYC Mayor's award for excellence in Science and Technology and the Sokol award for outstanding Science faculty at NYU. In 1997 he won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. In 1991 he received a Presidential Young Investigator Award from the National Science Foundation.

Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY, where the first feature film he worked on was TRON.

Research on IT infrastructure

Date: Thursday October 21, 2004
Time: 11am-12:15pm
Location: Woodward 149

Cris Pedregal Martin <cris@cs.unm.edu>

In this talk I will present my research towards supporting infrastructure-grade Information Technology. I will discuss current work on three somewhat related areas: recovery support, performance tradeoff shifts, and privacy in upcoming location-aware services. For each area I will outline the state of the art, present challenges, and discuss my current approach.

Two Problems in Computer Vision and Graphics

Lance Williams <williams@cs.unm.edu>

I will discuss my work on two problems in computer vision and graphics.

Computer vision: A visual computation is Euclidean invariant if every rotation and translation of its input produces an equal rotation and translation of its output. Is it possible for a network built from a finite number of discrete components (such as the human visual cortex) to implement visual computations which are invariant to continuous rotations and translations of its input? I will show that the answer to this problem is yes for an important problem in human visual information processing.
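A discrete, translational analogue of this question is easy to verify numerically: circular convolution, computed by a finite bank of identical local units, commutes with circular shifts of its input. (Continuous rotations and translations are the genuinely hard case the talk addresses; the helper names below are illustrative only.)

```python
def cyclic_shift(xs, k):
    """xs shifted right by k positions, wrapping around circularly."""
    k %= len(xs)
    return xs[-k:] + xs[:-k]

def circular_convolve(xs, kernel):
    """Circular convolution: each output is a fixed local combination
    of inputs, the same rule applied at every position."""
    n = len(xs)
    return [sum(kernel[j] * xs[(i - j) % n] for j in range(len(kernel)))
            for i in range(n)]
```

Shifting the input and then convolving gives exactly the shifted output, i.e., the computation is equivariant to discrete translations.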

Computer graphics: The power of wavelet transforms derives from their ability to compactly represent discrete periodic functions defined on rectangular domains. We describe a wavelet transform suitable for functions defined on surfaces modeled as meshes. We demonstrate analysis and synthesis of functions of a sphere, and then apply our transform to a problem in computer graphics, namely, synthesizing texture on surfaces without distortion or discontinuity.
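For background, the classic one-level Haar transform on a regular 1-D domain illustrates the average/difference decomposition that surface wavelet constructions generalize to meshes (a sketch of the standard textbook transform, not the mesh transform described in the talk):

```python
def haar_analysis(signal):
    """One analysis level: pairwise averages (coarse approximation)
    and pairwise half-differences (detail coefficients)."""
    n = len(signal) // 2
    avgs = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(n)]
    dets = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(n)]
    return avgs, dets

def haar_synthesis(avgs, dets):
    """Exact inverse: each pair is recovered as (avg + det, avg - det)."""
    out = []
    for a, d in zip(avgs, dets):
        out += [a + d, a - d]
    return out
```

Smooth signals produce small detail coefficients, which is the source of the compact representation; the round trip reconstructs the signal exactly.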

Inference of, for and by the Web - Machine Learning Challenges at Google

Date: Tuesday October 19, 2004
Time: 11am-12:15pm
Location: Woodward 149

David Cohn <david.cohn@somerandom.com>
Senior Research Scientist at Google
(joint work with 1000 other Googlers)

Abstract:
The web is one of the largest and most lucrative data sets in the world. It is also, remarkably, free - anyone can access it, and anyone can add to it. These attributes give rise to unique challenges and opportunities for anyone trying to organize and deliver web-based information.

I will discuss a few such challenges faced by Google, including adversarial information retrieval and spelling correction without a "correct" answer. I'll describe how we're applying machine learning and statistics to solve them, what ongoing challenges we face, and what it's like being in the heart of a company searching terabytes of data to serve over 200 million queries a day.

Bio:
David Cohn is a senior research scientist at Google, where he works on link analysis and machine learning problems related to search quality. He holds a Ph.D. in Computer Science from the University of Washington. Prior to joining Google, David studied as a postdoc at MIT, served as visiting faculty at University of Oregon and CMU, and worked for a variety of startups doing everything from workflow management to digital music recommendation. But not all at the same time.

Algorithms for Intensity-Modulated Radiation Therapy in 3D and 4D

Date: Tuesday October 12, 2004
Time: 11am-12:15pm
Location: Woodward 149

Shuang (Sean) Luan <sluan@cs.unm.edu>

Abstract:
Intensity-modulated radiation therapy (IMRT) is a modern cancer treatment technique that aims to deliver a highly conformal dose to a target tumor while sparing the surrounding sensitive structures. A key to performing IMRT is the accurate and efficient delivery of discrete dose distributions using the linear accelerator and the multileaf collimator (MLC). This gives rise to the MLC segmentation problem, whose goal is to compute a treatment plan that delivers a given dose distribution in the minimum amount of time.

In this talk, I will first present some new MLC segmentation algorithms and software. Our new algorithms, based on a novel unified approach and geometric optimization techniques, are very efficient and guarantee the optimal quality of the output treatment plans. Comparisons of our software with commercial planning systems and with the best-known leaf sequencing algorithm in the medical literature showed substantial improvement: the treatment plans computed by our software not only require much less delivery time (up to 75% less) but also have much better treatment quality. Our software has already been used to treat cancer patients at a couple of medical centers.
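To give a flavor of leaf sequencing (a deliberately simplified single-row sketch, not the speaker's algorithm), the code below decomposes a 1-D intensity profile into unit-intensity leaf openings. The number of openings equals the sum of the profile's positive jumps, a classic lower bound on beam-on time for unidirectional delivery:

```python
def decompose_profile(profile):
    """Decompose a 1-D intensity profile into half-open unit-intensity
    intervals (l, r), covering columns l..r-1. Each rising edge opens
    intervals; each falling edge closes them, so the interval count
    equals the sum of the positive jumps of the profile."""
    prof = [0] + list(profile) + [0]   # pad so edges at the ends count
    intervals = []
    open_lefts = []                    # stack of currently open intervals
    for i in range(1, len(prof)):
        diff = prof[i] - prof[i - 1]
        if diff > 0:
            open_lefts += [i - 1] * diff       # open `diff` intervals here
        else:
            for _ in range(-diff):             # close `-diff` intervals here
                intervals.append((open_lefts.pop(), i - 1))
    return intervals
```

For the profile [1, 3, 2] the positive jumps sum to 3, so three unit-intensity openings suffice, and stacking them reproduces the profile exactly.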

During the course of radiotherapy, the patient's anatomy and position tend to vary from those used for treatment planning. This creates the possibility of insufficient dose coverage of the tumor and overdosing of the surrounding sensitive structures, which may eventually lead to treatment failure. In the second part of this talk, I will discuss some future directions of our research, especially in the area of 4-D IMRT, in which organ and tumor motions are considered in the treatment planning.

ParoC++: An Object-Oriented Model for Adaptive HPC on the Grid

Date: Thursday October 7, 2004
Time: 11am-12:15pm
Location: Woodward 149

Dr. Pierre Kuonen <Pierre.Kuonen@eif.ch>
University of Applied Sciences of Fribourg (EIA-FR)
Switzerland

Abstract:
Adaptive utilization of resources in a highly heterogeneous computational environment such as the GRID is a difficult problem. In this talk we will present an object-oriented approach to the solution using requirement-driven parallel objects. Each parallel object is a self-described, shareable, and passive object that resides in a separate memory address space. The allocation of a parallel object is driven by the constraints on the resource on which the object will live. A new parallel programming paradigm is presented in the context of ParoC++, a new parallel object-oriented programming environment for high-performance distributed computing. ParoC++ extends C++ to support requirement-driven parallel objects and provides a runtime system with the services needed to run ParoC++ programs in distributed environments.

Using Network Tomography to Determine Link Delays in a Tunneled Network

Date: Tuesday October 5, 2004
Time: 11am-12:15pm
Location: Woodward 149

Hal Burch <hburch+@cs.cmu.edu>
School of Computer Science
Carnegie Mellon University

Abstract:
Network tomography is the analysis of aggregate measurements of combined factors in order to separate out the individual components. An example is using end-to-end delay measurements, which are the sum of delays over multiple links, to determine the delay across individual links. This type of network tomography is of interest to network operators because it obviates the need to deploy measurement systems at all nodes within a network. Previous network tomography techniques require deploying monitors on a large fraction of the nodes of the network, which can be expensive and intrusive on large commercial networks. Moreover, past network tomography algorithms were based on assumptions such as symmetric delays, tree-like topologies, and the ability to send packets to internal nodes.

This talk will present a system to determine link delays within a tunneled network that requires only one measurement host. Because it uses linear programming techniques, our algorithm does not require packets to be sent to internal nodes, nor does it assume symmetric delays or a tree-like topology. This talk will also present results from the deployment of our system on one of AT&T's networks.
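In the fully determined special case, the per-link delays can be recovered exactly by solving the route-by-link linear system. The sketch below (illustrative route matrix and delays, not AT&T data) uses exact Gaussian elimination in place of the talk's linear programming, which is what becomes necessary when the system is under- or over-determined:

```python
from fractions import Fraction

def solve_linear(A, b):
    """Gauss-Jordan elimination with exact rationals.
    A is an invertible n x n matrix; returns x with A x = b."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]             # swap pivot row up
        M[col] = [x / M[col][col] for x in M[col]]  # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[-1] for row in M]

# Three tunnels over three links; routes[i][j] = 1 if tunnel i uses link j.
routes = [[1, 1, 0],
          [0, 1, 1],
          [1, 0, 1]]
true_delays = [3, 5, 2]   # hypothetical per-link delays (ms)
# Each end-to-end observation is the sum of delays along the tunnel's links.
observed = [sum(r * d for r, d in zip(row, true_delays)) for row in routes]
recovered = solve_linear(routes, observed)
```

Here the three tunnels yield three independent equations, so the link delays are identifiable from one measurement host observing only end-to-end sums.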

This is joint work with Chris Chase and Albert Greenberg of AT&T Labs.

Exploring Grand Challenges in Trustworthy Computing

Date: Monday November 15, 2004
Time: 11am-12:15 pm
Location: FEC 141

Eugene H. Spafford <{mailto address="spaf@perdue.edu" text="spaf@perdue.edu" encode="hex" }>

Abstract:
We are presented with numerous challenges to make our information systems more secure, increase our confidence in our stored data, and protect the privacy of our personal information. However, under the steady barrage of attacks and flaws, it is sometimes difficult to think in terms of "big" challenges that can inspire us to make revolutionary, rather than evolutionary, strides.

In this presentation I will discuss a few of the trends and problems that have been occupying researchers and industry over the last few years. I will explain why advances against these challenges are unlikely to provide long-term improvements in the security of our infrastructure. >From this, I will then discuss the results of the recent CRA Grand Challenges conference on information security, including some discussion of how we might proceed to make progress on each of these four grand challenges.

Bio:
Eugene H. Spafford is a professor of Computer Sciences at Purdue University, a professor of Philosophy (courtesy appointment), a professor of Communication (courtesy), a professor of Electrical and Computer Engineering (courtesy), and is Executive Director of the Center for Education and Research in Information Assurance and Security. CERIAS is a campus-wide multi-disciplinary Center, with a broadly-focused mission to explore issues related to protecting information and information resources. Spaf has written extensively about information security, cybercrime, software engineering, and professional ethics. He has published over 100 articles and reports on his research, has written or contributed to over a dozen books, and he serves on the editorial boards of most major infosec-related journals.

In his career to date, Professor Spafford and his students are credited with a number of security "firsts," including the first open security scanner, the first widely-available intrusion detection tool, the first integrity-based control tool, the first multistage firewall, the first formal bounds on intrusion detection, the first reference model of firewalls, and some of the first work in vulnerability classification databases. Much of the current security product industry can therefore be viewed as based, in part, on his past research. His current research is directed towards issues of public policy and information security, architecture and construction of highly-secure systems, and cyberforensic technologies.

Dr. Spafford is a Fellow of the ACM, Fellow of the AAAS, Fellow of the IEEE, and is a charter recipient of the Computer Society's Golden Core award. In 2000, he was named as a CISSP, honoris causa. He was the year 2000 recipient of the NIST/NCSC National Computer Systems Security Award, generally regarded as the field's most significant honor in information security research. In 2001, he was named as one of the recipients of the "Charles B. Murphy" awards and named as a Fellow of the Purdue Teaching Academy, and in 2003 was named to the "Book of Great Teachers" -- thus receiving all three of the University's highest awards for outstanding teaching. In 2001, he was elected to the ISSA Hall of Fame, and he was awarded the William Hugh Murray medal of the NCISSE for his contributions to research and education in infosec. He is a 2003 recipient of the Air Force medal for Meritorious Civilian Service. In 2004, Spaf was named as the recipient of the IEEE Computer Society's Taylor Booth medal, and of the ACM SIGCAS's "Making a Difference" award.

Among his many activities, Spaf is co-chair of the ACM's U.S. Public Policy Committee, is a member of the Board of Directors of the Computing Research Association, and is a member of the President's Information Technology Advisory Committee (PITAC). He is a member of the FBI's Regional Computer Forensic Laboratory program, and of several corporate boards of advisors.

Machine Learning at UNM and Abroad

Date: Tuesday November 9, 2004
Time: 11 am-12:15 pm
Location: Woodward 149

Terran Lane <{mailto address="terran@cs.unm.edu" text="terran@cs.unm.edu" encode="hex"}>

You may not realize it, but machine learning techniques are popping up all around you. Since the 1970's, spacecraft, satellites, and aircraft have used Kalman filters to determine where they are. Tivo, Amazon, and Netflix all use "recommender systems" that employ more-or-less sophisticated forms of ML to help figure out what you want. Google uses ML clustering techniques to rank web pages and divide them into related topic groups.

In this talk, I'll give a brief overview of some machine learning background and applications, and then I'll turn it over to some of my current students to discuss their research. They'll each spend short segments describing their work in analysis of neuroimaging data, kernel computations, and bioinformatics.

Green Destiny: A 240-Node Energy-Efficient Supercomputer in Five Square Feet

Date: Thursday October 28, 2004
Time: 11am-12:15pm
Location: Woodward 149

Wu-chun Feng <feng@lnl.gov>
Los Alamos National Lab.

Abstract:
Since 1991, the performance per watt of a supercomputer has only improved 300-fold and the performance per square foot only 65-fold. Clearly, we are building less and less efficient supercomputers, thus resulting in the construction of massive machine rooms, and even, entirely new buildings. Furthermore, as these supercomputers continue to follow "Moore's Law for Power Consumption," the reliability of these systems continues to plummet, as per Arrenhius' equation when applied to microelectronics. To address these problems, we constructed Green Destiny, a 240-processor supercomputer that fits in a telephone booth and sips only 3 kilowatts of power. Green Destiny provides reliable supercomputing cycles while sitting in an 80-90 F dusty warehouse at 7,400 feet above sea level, and it does so without any special facilities, i.e., no air conditioning, no humidification control, no air filtration, and no ventilation.

Bio:
Dr. Wu-chun Feng --- or more simply, Wu --- is a technical staff member and team leader of Research & Development in Advanced Network Technology (RADIANT) in the Computer & Computational Sciences Division at Los Alamos National Laboratory and a fellow of the Los Alamos Computer Science Institute. His research interests span the areas of high- performance networking and computing. He received a B.S. in Electrical & Computer Engineering and Music (Honors) and an M.S. in Computer Engineering from the Pennsylvania State University in 1988 and 1990, respectively. He earned a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1996. He is a Senior Member of the IEEE.

How will Hamlet find the Holodeck?

Date: Tuesday October 26, 2004
Time: 11am-12:15pm
Location: Woodward 149

Ken Perlin <perlin@mrl.nyu.edu>
Computer Science Department
Media Research Laboratory
Center for Anvanced Technology
New York University

Abstract:
Character driven narrative, which drives such media as theatre, novels and cinema, is one of the primary ways that a society collectively explores that great human obsession: the human condition.

Computer-mediated interactive media can bring powerful new voices to this tradition, but only when two specific kinds of tools are sufficiently developed: (i) generation of contingent narratives that can shift interactively based on character, personality, psychological state, and kinship groups, and (ii) effective virtual actors that can convey subtle, dynamic and conflicting emotional states.

I will lay out a roadmap for achieving those goals, and I will show work that we've already done toward achieving powerful and believable virtual actors. I will conclude with some thoughts about what a psychologically mature interactive narrative medium might be like, and why it will utterly change our culture.

Bio:
Ken Perlin is a Professor in the Department of Computer Science at New York University. He is the Director of the Media Research Laboratory and the co-Director of the NYU Center for Advanced Technology. His research interests include graphics, animation, and multimedia. In January 2004 he was the featured artist at the Whitney Museum of American Art. In 2002 he received the NYC Mayor's award for excellence in Science and Technology and the Sokol award for outstanding Science faculty at NYU. In 1997 he won an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his noise and turbulence procedural texturing techniques, which are widely used in feature films and television. In 1991 he received a Presidential Young Investigator Award from the National Science Foundation.

Dr. Perlin received his Ph.D. in Computer Science from New York University in 1986, and a B.A. in theoretical mathematics from Harvard University in 1979. He was Head of Software Development at R/GREENBERG Associates in New York, NY from 1984 through 1987. Prior to that, from 1979 to 1984, he was the System Architect for computer generated animation at Mathematical Applications Group, Inc., Elmsford, NY, where the first feature film he worked on was TRON.

Research on IT infrastructure

Date: Thursday October 21, 2004
Time: 11am-12:15
Location: Woodward 149

Cris Pedregal Martin <cris@cs.unm.edu>

In this talk I will present my research towards supporting infrastructure-grade Information Technology. I will discuss current work on three somewhat related areas: recovery support, performance tradeoff shifts, and privacy in upcoming location-aware services. For each area I will outline the state of the art, present challenges, and discuss my current approach.

Two Problems in Computer Vision and Graphics.

Lance Williams <williams@cs.unm.edu>

I will discuss my work on two problems in computer vision and graphics.

Computer vision: A visual computation is Euclidean invariant if every rotation and translation of its input produces an equal rotation and translation of its output. Is it possible for a network built from a finite number of discrete components (such as the human visual cortex) to implement visual computations which are invariant to continuous rotations and translations of its input? I will show that the answer to this problem is yes for an important problem in human visual information processing.

Computer graphics: The power of wavelet transforms derives from their ability to compactly represent discrete periodic functions defined on rectangular domains. We describe a wavelet transform suitable for functions defined on surfaces modeled as meshes. We demonstrate analysis and synthesis of functions of a sphere, and then apply our transform to a problem in computer graphics, namely, synthesizing texture on surfaces without distortion or discontinuity.

Inference of, for and by the Web - Machine Learning Challenges at Google

Date: Tuesday October 19, 2004
Time: 11am-12:15pm
Location: Woodward 149

David Cohn <david.cohn@somerandom.com>
Senior Research Scientist at Google
(joint work with 1000 other Googlers)

Abstract:
The web is one of the largest and most lucrative data sets in the world. It is also, remarkably, free - anyone can access it, and anyone can add to it. These attributes give rise to unique challenges and opportunities for a anyone trying to organize and deliver web-based information.

I will discuss a few such challenges faced by Google, including adversarial information retrieval and spelling correction without a "correct" answer. I'll describe how we're applying machine learning and statistics to solve them, what ongoing challenges we face, and what it's like being in the heart of a company searching terabytes of data to serve over 200 million queries a day.

Bio:
David Cohn is a senior research scientist at Google, where he works on link analysis and machine learning problems related to search quality. He holds a Ph.D. in Computer Science from the University of Washington. Prior to joining Google, David studied as a postdoc at MIT, served as visiting faculty at University of Oregon and CMU, and worked for a variety of startups doing everything from workflow management to digital music recommendation. But not all at the same time.

Algorithms for Intenstiy-Modulated Radiation Therapy in 3D and 4D

Date: Tuesday October 12, 2004
Time: 11am-12:15pm
Location: Woodward 149

Shuang (Sean) Luan <sluan@cs.unm.edu>

Abstract:
Intensity-modulated radiation therapy (IMRT) is a modern cancer treatment technique and aims to deliver a highly conformal dose to a target tumor while sparing the surrounding sensitive structures. A key to performing IMRT is the accurate and efficient delivery of discrete dose distributions using the linear accelerator and the multileaf collimator. The MLC segmentation problems arise in such scenarios, whose goal is to compute a treatment plan that delivers the given dose distributions in the minimum amount of time.

In this talk, I will first present some new MLC segmentation algorithms and software. Our new algorithms, based on a novel unified approach and geometric optimization techniques, are very efficient and guarantee the optimal quality of the output treatment plans. Comparisons of our software with commercial planning systems and with the best-known leaf-sequencing algorithm in the medical literature showed substantial improvement: the treatment plans computed by our software not only require much less delivery time (up to 75% less) but also have much better treatment quality. Our software has already been used for treating cancer patients at a couple of medical centers.

During the course of radiotherapy, the patient's anatomy and position tend to vary from those used for treatment planning. This gives rise to the possibility of insufficient dose coverage of the tumor and overdose of the surrounding sensitive structures, which may eventually lead to treatment failure. So, in the second part of this talk, I will discuss some future directions of our research, especially in the area of 4-D IMRT, in which organ and tumor motions are considered during treatment planning.

ParoC++: An Object-oriented Model for Adaptive HPC on the Grid

Date: Thursday October 7, 2004
Time: 11am-12:15pm
Location: Woodward 149

Dr. Pierre Kuonen <Pierre.Kuonen@eif.ch>
University of Applied Sciences of Fribourg (EIA-FR)
Switzerland

Abstract:
Adaptive utilization of resources in a highly heterogeneous computational environment such as the Grid is a difficult problem. In this talk we present an object-oriented approach to the solution using requirement-driven parallel objects. Each parallel object is a self-described, shareable, and passive object that resides in a separate memory address space. The allocation of a parallel object is driven by constraints on the resource on which the object will live. A new parallel programming paradigm is presented in the context of ParoC++, a new parallel object-oriented programming environment for high-performance distributed computing. ParoC++ extends C++ to support requirement-driven parallel objects and provides a runtime system with services to run ParoC++ programs in distributed environments.
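
The requirement-driven allocation idea can be sketched abstractly: an object declares resource constraints, and the runtime matches them against the available hosts. The dictionary schema and `allocate` helper below are my own illustration of the concept, not the ParoC++ API:

```python
# Hypothetical host descriptions; a real Grid runtime would discover
# these dynamically rather than hard-code them.
hosts = [
    {"name": "node-a", "mem_gb": 4,  "cpus": 2},
    {"name": "node-b", "mem_gb": 32, "cpus": 16},
]

def allocate(requirements, hosts):
    """Return the first host whose resources meet every stated
    requirement; the object is then placed on that host."""
    for host in hosts:
        if all(host.get(k, 0) >= v for k, v in requirements.items()):
            return host["name"]
    raise RuntimeError("no host satisfies the object's requirements")

print(allocate({"mem_gb": 16, "cpus": 8}, hosts))  # node-b
```

The key design point is that placement is decided by the object's declared requirements, not by the programmer naming a machine.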

Using Network Tomography to Determine Link Delays in a Tunneled Network

Date: Tuesday October 5, 2004
Time: 11am-12:15pm
Location: Woodward 149

Hal Burch <hburch+@cs.cmu.edu>
School of Computer Science
Carnegie Mellon University

Abstract:
Network tomography is the analysis of measurements of combined factors in order to separate out the individual components. An example of network tomography is using end-to-end delay measurements, which are sums of delays over multiple links, to determine the delay across individual links. This type of network tomography is of interest to network operators, as it obviates the need to deploy measurement systems at all nodes within a network. Previous network tomography techniques require deploying monitors on a large fraction of the nodes of the network, which can be expensive and intrusive on large commercial networks. Moreover, past network tomography algorithms were based on assumptions such as symmetric delays, tree-like topologies, and the ability to send packets to internal nodes.

This talk will present a system to determine link delays within a tunneled network that requires only one measurement host. Because it uses linear programming techniques, our algorithm does not require packets to be sent to internal nodes, nor does it assume symmetric delays or a tree-like topology. This talk will also present results from the deployment of our system on one of AT&T's networks.

This is joint work with Chris Chase and Albert Greenberg of AT&T Labs.
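
The core inference can be illustrated with a toy linear system: each measured path delay is the sum of the (unknown) delays of the links it traverses. The sketch below recovers link delays by least squares; the talk's system uses linear programming, and the routing matrix here is invented for illustration:

```python
import numpy as np

# Routing matrix A: row i marks which of links 0..3 path i uses,
# so the measured end-to-end delays are d = A @ x for unknown
# per-link delays x.
A = np.array([
    [1, 1, 0, 0],   # path 1 traverses links 0 and 1
    [0, 1, 1, 0],   # path 2 traverses links 1 and 2
    [0, 0, 1, 1],   # path 3 traverses links 2 and 3
    [1, 0, 0, 1],   # path 4 traverses links 0 and 3
    [1, 0, 1, 0],   # path 5 traverses links 0 and 2
])
true_x = np.array([5.0, 2.0, 7.0, 1.0])  # ground-truth link delays
d = A @ true_x                            # observed path delays

# Solve the overdetermined system for the individual link delays.
x, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.round(x, 6))  # recovers [5. 2. 7. 1.]
```

With enough linearly independent paths the link delays are uniquely determined; when they are not, extra constraints (e.g. nonnegativity, handled naturally by an LP formulation) narrow the solution set.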

Computational Challenges from the Tree of Life

Date: Tuesday September 28, 2004
Time: 11am-12:15pm
Location: Woodward 149

Bernard Moret <moret@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
My research has focused for the last 5 years on computational models and methods for reconstructing evolutionary histories, known as phylogenies, from molecular data. The most ambitious phylogenetic project is the "Assembling the Tree of Life" (ATOL) initiative, which aims at reconstructing the evolutionary history of all organisms on the planet -- an estimated 10 to 100 million species. ATOL gives rise to a computational grand challenge: assuming that biologists collect sufficient data about extant species, how will we compute a reconstruction for 10 million species when current methods have problems handling a few hundred species and scale poorly?
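
One way to appreciate the scale of the challenge: the number of distinct unrooted binary trees on n labeled species is (2n-5)!!, a standard combinatorial fact, so the search space explodes long before n reaches even a hundred:

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary trees on n labeled leaves:
    the double factorial (2n-5)!! = 3 * 5 * ... * (2n-5)."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in [4, 10, 20]:
    print(n, num_unrooted_trees(n))
# n=10 already gives over 2 million candidate trees; n=20 exceeds 10^20.
```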

I lead a project named CIPRES, the goal of which is to establish a Cyber Infrastructure for Phylogenetic Research that will support reconstruction of the Tree of Life. In this talk, I will briefly introduce the biological foundations for phylogenetic analysis, then discuss the computational challenges posed by the ATOL initiative, from the design and validation of computationally useful models of evolution to the actual computation and assessment of the Tree of Life itself. Sophisticated mathematics (combinatorics and statistics), good algorithm design (optimization methods, bounding strategies, scalable data structures), and very careful algorithm engineering and testing methodologies are all required for success.

Stable Internet Routing Without Global Coordination

Date: Tuesday September 21, 2004
Time: 11am-12:15pm
Location: Woodward 149

Jennifer Rexford <jrex@research.att.com>
Department of Network Management & Performance
AT&T Research Lab

Abstract:
The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability. This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet's hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies. Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.

The work described in this talk draws on the papers
http://www.research.att.com/~jrex/papers/ton01-stable.pdf
http://www.research.att.com/~jrex/papers/infocom01.ps
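
The flavor of such relationship-based policies can be sketched with the widely cited customer/peer/provider preference ordering; the specific ranking below is an assumption for illustration, and the papers linked above give the actual guidelines (which also constrain route export, not just selection):

```python
# Each AS ranks routes by the business relationship with the
# neighbor that advertised them, independent of other ASes' choices.
RELATIONSHIP_RANK = {"customer": 0, "peer": 1, "provider": 2}

def best_route(routes):
    """Pick the route learned from the most preferred neighbor
    class, breaking ties by shorter AS path."""
    return min(routes, key=lambda r: (RELATIONSHIP_RANK[r["from"]],
                                      len(r["as_path"])))

routes = [
    {"from": "provider", "as_path": [2, 5]},
    {"from": "customer", "as_path": [7, 8, 9]},
    {"from": "peer",     "as_path": [4]},
]
print(best_route(routes)["from"])  # customer wins despite a longer path
```

Because every AS can apply this ranking locally, no global coordination or disclosure of BGP configurations is needed, which is the practical appeal of the approach.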

Monitoring and Adaptation for Large-scale Networked Systems

Date: Thursday September 16, 2004
Time: 11am-12:15pm
Location: Woodward 149

Patrick G. Bridges <bridges@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
Modern system software must deal with widely varying large-scale environments ranging from ad hoc networks of low-powered sensors to clusters of thousands of high-speed processors. On both ends of this spectrum, system scale presents a wide range of problems to operating system and networking software, and emerging hybrid systems will present an even more challenging environment for system software designers. In this talk, I will describe a potential large-scale system for monitoring and modeling forest fires, describe the research challenges it presents to system software designers, and describe three networking-related projects that my students and I are working on to address some of these challenges. These projects include a new approach to monitoring the behavior of large-scale networked systems, the design of new network protocol adaptation mechanisms for large-scale networked systems, and a learning-based approach to network adaptation policies that I am beginning to explore in cooperation with Professors Forrest and Lane.

The Art, Research, Technology, and Science Laboratory
The ARTSLab

Date: Tuesday September 14, 2004
Time: 11am-12:15pm
Location: Woodward 149

Ed Angel <angel@cs.unm.edu>
Departments of Computer Science, Electrical and Computer Engineering, and Media Arts
University of New Mexico

Abstract:
The strengths of the state of New Mexico include its technical community, its geography, and its diverse cultural resources. Nevertheless, the state ranks at the bottom of most economic indicators. Over the past few years, many groups both inside and outside the University have embarked on efforts to improve the economic situation in New Mexico by building on our unique combination of strengths.

The ARTSLab is UNM's contribution to the state's ambitious media industries strategic plan. The ARTSLab will involve almost all of UNM's schools and colleges in efforts ranging from research to development to the creation of new academic programs. Computer Science has a key role to play in this effort in ways that span the spectrum of our discipline. In this talk, I will introduce the elements of the ARTSLab, our educational and industrial collaborators, and our projects involving the multiprojector domed system at the LodeStar Astronomy Center.

Introduction to Research Issues in Stochastic Modeling

George Luger <luger@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
We introduce techniques for stochastic modeling. We briefly describe full Bayesian inference. We discuss the assumptions supporting Bayesian belief networks (BBNs) and the complexity reductions that result from using this technology. We give a simple example of a BBN and discuss the representational limitations of the BBN approach. We next describe the creation of a first-order system for building stochastic models. Dan Pless, as part of his Ph.D. research at UNM, first designed and built this system. We are currently, with support from the US Navy, rebuilding these first-order stochastic modeling tools in Java. We end with a simple example, built by Chayan Chakrabarti and Roshan Rommohan, of our program doing diagnostic reasoning. The example uses a variant of hidden Markov model technology to predict failures in a helicopter rotor system.
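
The failure-prediction example can be illustrated with the standard HMM forward recursion. All states, probabilities, and observations below are invented for illustration; they are not the helicopter-rotor model from the talk:

```python
# Two-state HMM: the system is either "ok" or "failing", and each
# time step we observe either "normal" readings or "vibration".
states = ["ok", "failing"]
start = {"ok": 0.95, "failing": 0.05}
trans = {"ok":      {"ok": 0.9, "failing": 0.1},
         "failing": {"ok": 0.0, "failing": 1.0}}   # failure is absorbing
emit = {"ok":      {"normal": 0.8, "vibration": 0.2},
        "failing": {"normal": 0.3, "vibration": 0.7}}

def forward(observations):
    """P(state | observations so far), computed by the standard
    forward recursion with per-step normalization."""
    belief = dict(start)
    for obs in observations:
        new = {s: emit[s][obs] * sum(belief[p] * trans[p][s]
                                     for p in states)
               for s in states}
        z = sum(new.values())
        belief = {s: v / z for s, v in new.items()}
    return belief

print(forward(["vibration", "vibration", "vibration"]))
# repeated vibration readings shift the belief toward "failing"
```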

Choosing a Random Peer

Date: Thursday September 9, 2004
Time: 11am-12:15pm
Location: Woodward 149

Jared Saia <saia@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
We present the first fully distributed algorithm which chooses a peer
uniformly at random from the set of all peers in a distributed hash
table (DHT). Our algorithm has latency $O(\log n)$ and sends $O(\log n)$
messages in expectation for a DHT like Chord. Our motivation for
studying this problem is threefold: to enable data collection by
statistically rigorous sampling methods; to provide support for
randomized, distributed algorithms over peer-to-peer networks; and to
support the creation and maintenance of random links, and thereby
offer a simple means of improving fault-tolerance.

This is joint work with Valerie King which appeared in PODC 2004.
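
To see why the problem is nontrivial, consider the naive approach of hashing to a uniformly random point on the ID ring and returning its successor: each peer is then chosen with probability proportional to the arc of the ring it owns, which is far from uniform. A small simulation with made-up peer IDs:

```python
import random

RING = 2 ** 16
peers = [0, 100, 5000, 60000]        # illustrative peer IDs, sorted

def successor(point):
    """Return the first peer at or clockwise of the given ring point."""
    for p in peers:
        if p >= point:
            return p
    return peers[0]                  # wrap around past the top of the ring

rng = random.Random(42)
counts = {p: 0 for p in peers}
for _ in range(20000):
    counts[successor(rng.randrange(RING))] += 1
print(counts)
# Peer 60000 owns ~84% of the ring and dominates; peer 100 owns
# a tiny arc and is almost never chosen.
```

Correcting this arc-length bias while keeping only $O(\log n)$ messages is the heart of the problem the talk addresses.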

Research Overview

Cris Moore <moore@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
I will describe several areas of interdisciplinary research at the boundary of physics and computer science. These include models of social networks, phase transitions in search and optimization problems, and quantum computing.

Current Research in Biochemical Computing

Date: Tuesday September 7, 2004
Time: 11am-12:15pm
Location: Woodward 149

Darko Stefanovic <darko@cs.unm.edu>
Department of Computer Science
University of New Mexico

Abstract:
I'll describe ongoing work in the Molecular Computing Group. In
particular, I'll describe a new technology for amorphous
bio-compatible computing, using deoxyribozyme-based logic gates.

Oligonucleotides can act as enzymes, or ribozymes, on other
oligonucleotides, yielding oligonucleotide products. Moreover, these
reactions can be controlled by inputs which are also oligonucleotides.
We interpret these reactions as logic gates, and the concentrations of
chemical species as signals. Since these reactions are homogeneous,
i.e., they use oligonucleotides for both their inputs and their
outputs, we can compose them to construct complex logic circuits.
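
The composition property can be caricatured in software: when every gate's inputs and outputs are the same kind of signal, gates chain directly. The thresholded Boolean model below is my simplification for illustration, not the group's chemical kinetics:

```python
# Signals are concentrations in [0, 1]; a gate reads concentrations
# and produces a concentration, so outputs feed further gates.
THRESHOLD = 0.5

def AND(a, b):
    return 1.0 if a > THRESHOLD and b > THRESHOLD else 0.0

def NOT(a):
    return 1.0 if a <= THRESHOLD else 0.0

def circuit(in1, in2):
    """A two-gate feed-forward circuit: out = AND(in1, NOT(in2)).
    The NOT gate's output composes directly into the AND gate
    because both are just concentrations."""
    return AND(in1, NOT(in2))

print(circuit(0.9, 0.1))  # 1.0: input 1 high, input 2 low
print(circuit(0.9, 0.8))  # 0.0: a high input 2 blocks the gate
```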

I'll describe the several kinds of logic gates we have developed, as
well as their initial applications in simple feed-forward circuits,
including arithmetic and game-playing automata. I'll also demonstrate
our open-system designs for a biomolecular realization of elementary
components for a digital computer, including feedback circuits such as
bistable memory elements (flip-flops) and oscillators.

An eventual goal of this technology is the development of in-vivo
computational devices. Thus, our system for chemical computation
offers functionality similar to conventional electronic circuits with
the potential for deployment inside of living cells.

I'll outline some outstanding problems in the design and applications
of reliable deoxyribozyme logic.

Advanced Methods for Information Conveyance

Date: Thursday August 26, 2004
Time: 11am- 12:15pm
Location: Woodward 149

David S. Ebert <ebertd@ecn.purdue.edu>
School of Electrical and Computer Engineering
Purdue University

Abstract:
In this talk, I'll describe our research beyond realistic rendering
and visualization to address what we consider to be the primary
purpose of graphics and visualization: effective conveyance of
information to the user. We are developing techniques to enable
graphics and visualization systems to become usable, dependable, and
eventually indispensable tools for artists, animators, scientists,
business managers, and information analysts to analyze and convey
meaning from the underlying data. We are achieving this goal through
new advanced interfaces, interactive image generation, and novel
rendering and visualization techniques that incorporate and extend
techniques from physics, perception, art, and
illustration.

Bio:
David S. Ebert is an Associate Professor in the School of Electrical
and Computer Engineering at Purdue University and director of the
Purdue University Rendering and Perceptualization Lab with over $4M in
active grants. He received his Ph.D. from the Computer and Information
Science Department at The Ohio State University in 1991. His research
interests are scientific, medical, and information visualization,
computer graphics, animation, and procedural techniques. Dr. Ebert
performs research in volume rendering, illustrative visualization,
minimally immersive visualization, realistic rendering, procedural
texturing, modeling, and animation, and modeling natural phenomena.
Ebert has been very active in the graphics community, teaching
courses, presenting papers, chairing the ACM SIGGRAPH 97 Sketches
program, co-chairing the IEEE Visualization '98 and '99 Papers
program, serving on the ACM SIGGRAPH Executive Committee and serving
as Editor-in-Chief for IEEE Transactions on Visualization and Computer
Graphics. Ebert is also editor and co-author of the seminal text on
procedural techniques in computer graphics, Texturing and Modeling: A
Procedural Approach, whose third edition was published in December
2003.

Rewriting Logic Semantics: From Language Specifications to Formal Analysis Tools

Date: Tuesday August 24, 2004
Time: 11am-12:15 pm
Location: Woodward 149

Jose Meseguer <meseguer@cs.uiuc.edu>
University of Illinois at Urbana-Champaign

Abstract:
This talk summarizes joint work with Grigore Rosu, and with several
Ph.D. students working with Rosu and with me at UIUC.
Formal semantic definitions of concurrent languages, when specified in a
well-suited semantic framework and supported by generic and efficient formal
tools, can be the basis of powerful software analysis tools. Such tools can
be obtained for free from the semantic definitions; in our experience, in
just the few weeks required to define a language's semantics, even for large
languages like Java. By combining, yet distinguishing, both equations and
rules, rewriting logic semantic definitions unify both the semantic equations
of equational semantics (in their higher-order denotational version or their
first-order algebraic counterpart) and the semantic rules of SOS. Several
limitations of both SOS and equational semantics are thus overcome within this
unified framework. By using a high-performance implementation of rewriting
logic such as Maude, a language's formal specification can be automatically
transformed into an efficient interpreter. Furthermore, by using Maude's
breadth-first search command, we also obtain for free a semi-decision procedure
for finding failures of safety properties; and by using Maude's LTL model
checker, we obtain, also for free, a decision procedure for LTL properties of
finite-state programs. These possibilities, and the competitive performance
of the analysis tools thus obtained, are illustrated by means of a concurrent
Caml-like language; similar experience with Java (source and JVM) programs is
also summarized.
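
The division of labor between equations, rules, and search can be sketched in miniature. The toy "language" below is a counter with inc/dec/reset actions, nothing like Maude's actual engine: an equation deterministically normalizes states, rules generate the nondeterministic transitions, and breadth-first search looks for violations of a safety property:

```python
from collections import deque

def normalize(state):
    # "Equation": a negative counter is identified with zero, so all
    # states are kept in canonical form before rules apply.
    return max(state, 0)

RULES = [lambda s: s + 1,        # inc
         lambda s: s - 1,        # dec
         lambda s: 0]            # reset

def search_for_failure(start, bad, max_depth):
    """BFS over rule applications, normalizing after every step;
    returns a shortest counterexample path to a bad state, or None
    (a semi-decision procedure for safety violations)."""
    root = normalize(start)
    frontier = deque([(root, [root])])
    seen = {root}
    while frontier:
        state, path = frontier.popleft()
        if bad(state):
            return path
        if len(path) > max_depth:
            continue
        for rule in RULES:
            nxt = normalize(rule(state))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(search_for_failure(0, lambda s: s >= 3, max_depth=10))
# [0, 1, 2, 3]: a shortest run of inc steps reaching the bad state
```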