Date: Wednesday, June 22, 2005
Time: 11:00 a.m.
Place: FEC 141
Professor Neil D. Jones
DIKU (Computer Science Department)
University of Copenhagen
This overview talk describes several forms of self-reference in Computer Science, all having to do with programming languages. The foundations of Computer Science stem from Recursive Function Theory in Mathematics, which takes as starting postulates three assumptions about any algorithmic language L:
1. L has a self-interpreter (also called a "universal function")
2. L-programs can be specialized (Kleene's S-m-n Theorem, or "partial evaluation")
3. Computational completeness: a function is L-computable if and only if
it is computable by some Turing machine.
Both 1 and 2 are (or can be) self-referential by nature: a self-interpreter may be applied to interpret itself, and a specializer may be used to specialize itself.
- The first leads to the well-known unsolvability of the Halting Problem.
- The second leads to practically usable methods for compiling, given an interpreter,
and even to automatic generation of compilers from interpreters (see the sketch after this list).
Reference: Partial Evaluation and Automatic Program Generation, Jones, Gomard
and Sestoft, 1993.
- A combination of the two leads to Kleene's fascinating Second Recursion
Theorem, which, in effect, states that any programming language allows
"reflective" programs that may refer to their own texts in order to
carry out a computation.
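To make the specialization idea concrete, here is a small illustrative sketch (my own toy example in Python, not code from the talk): an interpreter for a tiny expression language and a "specializer" in the spirit of Kleene's S-m-n theorem. functools.partial merely freezes the program argument; a genuine partial evaluator would instead emit a standalone residual program, which is the compilation-by-specialization (first Futamura projection) mentioned in the second bullet above.

from functools import partial

def interp(program, x):
    """Interpret a tiny expression language: programs are nested tuples
    built from 'x', numbers, ('add', e1, e2) and ('mul', e1, e2)."""
    if program == 'x':
        return x
    if isinstance(program, (int, float)):
        return program
    op, e1, e2 = program
    a, b = interp(e1, x), interp(e2, x)
    return a + b if op == 'add' else a * b

# The "program" 3*x + 1, written as data for the interpreter.
source = ('add', ('mul', 3, 'x'), 1)

# Specialization (S-m-n): fix the program argument, leaving the data argument.
# A real specializer applied to interp and source would output standalone
# code computing 3*x + 1, i.e. a compiled version of source.
target = partial(interp, source)

assert target(10) == interp(source, 10) == 31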
Date: Thursday, April 28, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Stefan Posse
Associate Professor, Psychiatry Research (Schizophrenia)
University of New Mexico
Recent advances in functional MRI methodology enable monitoring of changes
in brain activation during the ongoing scan in single subjects. This methodology
can be used to assess online data quality and subject performance, and enables
novel stimulation designs with neurofeedback. Key technical developments, including
novel parametric imaging methods (multi-echo EPI), real-time statistical analysis
and pattern classification with boosting, will be presented along with selected
applications using unaveraged single trials.
The second part of the talk will give a brief overview of the development
of high-speed Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI) to map brain
biochemicals with high spatial resolution. Combination with parallel imaging
techniques (SENSE/GRAPPA) takes this methodology one step further, dramatically
reducing spatial-spectral encoding time to enable 3D scans in clinically acceptable
measurement times and time-resolved 2D mapping of metabolic dynamics.
Date: Tuesday, April 26, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Vince Clark <vclark@unm.edu>
Department of Psychology
University of New Mexico
Imaging the structure and function of the human brain offers the exciting possibility of explaining the physical basis of mental phenomena that have intrigued us for centuries, such as cognition, emotion and consciousness. This could also lead to improved methods for diagnosing, preventing and treating neurological and mental illness. While we have made many advances towards these goals, there are many problems to overcome. I will give an overview of some of the ongoing neuroimaging projects at the MIND Institute, describe some of the problems we are attempting to solve, and discuss what we have learned so far.
Date: Tuesday, April 19, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Calogero Zarba <zarba@cs.unm.edu>
Department of Computer Science
University of New Mexico
The Nelson-Oppen combination method combines decision procedures for theories satisfying certain conditions into a decision procedure for their union. While the method is known to be correct in the setting of unsorted first-order logic, some current implementations of it appear in tools that use a sorted input language. Until 2004, however, there were no theoretical results on the correctness of the method in a sorted setting, nor was it obvious that the method in fact lifts as is to logics with sorts.
To bridge this gap between the existing theoretical results and the current implementations, we extend the Nelson-Oppen method to (order-)sorted logic and prove it correct under conditions similar to the original ones. From a theoretical point of view, the extension is relevant because it provides a rigorous foundation for the application of the method in a sorted setting.
From a practical point of view, the extension has the considerable added benefit that, in a sorted setting, the method's preconditions become easier to satisfy in practice and its nondeterminism is generally reduced.
* Based on joint work with Cesare Tinelli (University of Iowa).
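As a concrete illustration of one ingredient of the method, the sketch below (my own Python toy, not the speakers' code; the symbol-to-theory table is invented) shows the variable-abstraction ("purification") step, which splits a mixed term into pure parts by replacing alien subterms with fresh variables and recording defining equations.

from itertools import count

# Hypothetical assignment of function symbols to theories.
THEORY = {'+': 'arith', '*': 'arith', 'f': 'uf', 'g': 'uf'}

fresh = (f'v{i}' for i in count())

def purify(term, parent_theory, defs):
    """Terms are variables (strings), ints, or tuples (symbol, arg1, ...)."""
    if isinstance(term, (str, int)):
        return term
    sym, *args = term
    theory = THEORY[sym]
    pure = (sym, *[purify(a, theory, defs) for a in args])
    if parent_theory is not None and theory != parent_theory:
        v = next(fresh)
        defs.append((v, pure))       # record the defining equation v = subterm
        return v                     # replace the alien subterm by a fresh variable
    return pure

defs = []
# Mixed term f(x + 1): uninterpreted f applied to an arithmetic subterm.
print(purify(('f', ('+', 'x', 1)), None, defs), defs)
# -> ('f', 'v0') [('v0', ('+', 'x', 1))]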
Date: Tuesday, April 5, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Don Hush <dhush@lanl.gov>
Los Alamos National Labs
This talk provides a brief introduction to support vector machines (SVMs) for machine learning and then describes some recent advances in the development of fast algorithms for solving the quadratic programming (QP) problem associated with the SVM training process. Since the primal QP problem can be prohibitively large, a smaller dual QP problem is solved and its solution mapped to a primal solution. Decomposition algorithms are arguably the most common type of algorithm used to solve the dual in practice. We describe a new class of decomposition algorithms (introduced by Simon) that produce an approximate solution to the dual in O(n^2 ln(1/ε)) time, where n is the number of data samples and ε is the accuracy of the approximation. This is a substantial improvement over the previous best result of O(n^5 ln(n)/ε) for an algorithm we developed in 2003. We also describe a new O(n ln(n)) algorithm for mapping an approximate dual solution to an approximate primal solution. Finally, we compare these new algorithms with algorithms currently used in practice.
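For readers unfamiliar with the dual QP, the toy sketch below (my own illustration in Python; it is not one of the decomposition algorithms from the talk) runs plain coordinate ascent on the box-constrained dual of a kernel SVM without a bias term, and then maps the dual solution back to primal weights.

import numpy as np

def svm_dual_coordinate_ascent(K, y, C=1.0, epochs=50):
    """Maximize sum(a) - 0.5 * a^T (y y^T * K) a  subject to 0 <= a <= C."""
    n = len(y)
    a = np.zeros(n)
    f = np.zeros(n)                      # f[i] = sum_j a[j] y[j] K[i, j]
    for _ in range(epochs):
        for i in range(n):
            grad = 1.0 - y[i] * f[i]     # partial derivative of the dual objective
            new_ai = np.clip(a[i] + grad / K[i, i], 0.0, C)
            delta = new_ai - a[i]
            if delta != 0.0:
                f += delta * y[i] * K[:, i]
                a[i] = new_ai
    return a

# Tiny separable example with a linear kernel.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = X @ X.T
alpha = svm_dual_coordinate_ascent(K, y)
w = (alpha * y) @ X                      # primal weights recovered from the dual
print(np.sign(X @ w))                    # matches y on this toy data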
"Can We Avoid Catastrophic Failures of Computer Networks?"
Date: Thursday, March 31, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Hermann Maurer <hmaurer@iicm.edu>
Dean, Faculty of Computer Science
Graz University of Technology and
Director, Institute for Hypermedia Systems
JOANNEUM RESEARCH, Graz, Austria
The number of viruses and other computer-threatening software is increasing
at an alarming speed. Even if we act decisively (which we don't), the likelihood
of a large-scale and long-term failure of all computers and computer networks
is high. Such a failure will not be caused by some super-hacker, but rather by
a well-planned cyber-attack. The consequences of a serious failure would be catastrophic.
Since our dependency on computers and computer networks is steadily increasing,
the consequences will be worse the later such a breakdown occurs! In this talk
we argue why a failure is likely and what it will cause if we do not take precautions
that involve fairly far-reaching technical, economic and political decisions.
Born in Vienna, Austria, Maurer studied mathematics and computer science at
the Universities of Vienna and Calgary, and was Assistant and later Associate
Professor of Computer Science at the University of Calgary from 1966 to 1971. He then
took on various positions as full professor at a number of universities, and
is now at the Graz University of Technology, specializing in networked multimedia
systems and their applications to knowledge management, learning, digital libraries,
museums, and the societal implications of new developments in computing. As a hobby
he writes a series of science-fiction novels. Some of his main accomplishments
include serving as Dean of the Faculty of Computer Science, with about 200 researchers and
2500 students, and heading two research institutes in Graz. He has published some 600
papers and 20 books, half of them technical, the most recent on "Learning
Support Systems for Organizational Learning" (2004); the other half of
his books are science fiction. He has supervised some 500 M.Sc. and 40 Ph.D.
theses, and founded 16 companies and a number of international conferences and
journals. Maurer holds two honorary doctorates, is a member of two academies
of science, and has been awarded many other distinctions. He has been the project
leader of some 20 multimillion-dollar projects.
NOTE: The first 80 persons attending the talk will get a free copy of the
speaker's book: "The Paranet - The Breakdown of the Internet".
Discovering and Debugging Algebraic Specifications for Java Classes
Date: Tuesday, March 22, 2005
Time: 11:00 a.m.
Place: Woodward 149
Amer Diwan, University of Colorado at Boulder,
diwan@cs.colorado.edu
http://www.cs.colorado.edu/~diwan
When a programmer uses a class library, well-documented interfaces are critical to avoid bugs. Algebraic specifications can document interfaces unambiguously and accurately, and are thus a desirable complement to informal documentation. Unfortunately, algebraic specifications are costly to develop.
We present a system for reducing the cost of developing algebraic specifications for Java classes. The system consists of two components: an algebraic specification discovery tool and an algebraic interpreter. The first tool automatically discovers algebraic specifications from Java classes: it generates tests and captures the information it observes during their execution as algebraic axioms. In practice, this tool is accurate, but not complete; still, the discovered specifications are a good starting point for writing a specification. The second component of our system is the algebraic specification interpreter, which helps developers achieve specification completeness. Given an algebraic specification of a class, the interpreter generates a rapid prototype which can be used within an application just like any regular Java class. When running an application that uses the rapid prototype, the interpreter prints error messages that tell the developer in what way the specification is incomplete.
This is collaborative work with Johannes Henkel and Christoph Reichenbach.
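As a small illustration of the kind of axiom such tools discover and check (written here in Python rather than Java, with an invented Stack class; this is not the speakers' tool), generated tests can probe an algebraic axiom such as pop(push(s, x)) = s:

import random

class Stack:
    def __init__(self, items=()):
        self._items = list(items)
    def push(self, x):
        return Stack(self._items + [x])   # applicative style: returns a new stack
    def pop(self):
        return Stack(self._items[:-1])
    def top(self):
        return self._items[-1]
    def __eq__(self, other):
        return self._items == other._items

def check_axiom(trials=1000):
    """Generated tests for the axiom pop(push(s, x)) == s."""
    for _ in range(trials):
        s = Stack(random.choices(range(100), k=random.randrange(5)))
        x = random.randrange(100)
        if s.push(x).pop() != s:
            return False
    return True

print(check_axiom())   # True: no counterexample found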
Large Scale Optimization Algorithms for Homeland Security Applications
Date: Thursday, March 10, 2005
Time: 11:00 a.m.
Place: Woodward 149
Dr. Bart G. Van Bloemen Waanders <bartv@sandia.gov>
Optimization and Uncertainty Estimation Department
Sandia National Labs.
Large-scale optimization algorithms are demonstrated on a range of important homeland security applications. In the event of a chemical or biological contamination, optimization algorithms can be utilized to help reconstruct the initial conditions from a sparse set of sensors located throughout the domain. Given correct initial conditions, accurate forward simulations can then be used to help with mitigation procedures. Inversion problems of this type are ill-conditioned and underdetermined, and therefore difficult to solve, especially if inversion parameters exist at every discretized point in the domain. Intrusive all-at-once approaches are utilized to take advantage of the internal linear algebra of the forward simulation, thereby providing the most computationally efficient solution technique. Numerical results for hypothetical contamination events in internal facilities, external flows and water distribution systems are presented. In addition, optimization algorithms are applied to a decontamination process for a simple internal facility. A brief overview of fundamental optimization methods and implementations will be presented, in addition to numerical results that compare and contrast these different methods and implementations.
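The following toy sketch (my own illustration, using a dense random linear forward model rather than the PDE-constrained, all-at-once formulation discussed in the talk) shows why sparse-sensor source inversion is underdetermined and how a simple Tikhonov regularization term makes the solve well-posed.

import numpy as np

rng = np.random.default_rng(0)
n_sources, n_sensors = 200, 12            # many unknown source strengths, few sensor readings
A = rng.standard_normal((n_sensors, n_sources))   # stand-in for a linear forward (transport) model
q_true = np.zeros(n_sources)
q_true[37] = 5.0                          # a single localized release
d = A @ q_true + 0.01 * rng.standard_normal(n_sensors)

# Tikhonov-regularized least squares:  min_q ||A q - d||^2 + lam * ||q||^2,
# solved via the regularized normal equations (A^T A + lam I) q = A^T d.
lam = 1e-2
q_est = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ d)

print(np.linalg.norm(A @ q_est - d))      # the data are fit well...
print(q_est[37], np.abs(q_est).max())     # ...but the estimate smears the source over many cells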
Preferences and Domination
Date: Thursday, March 3, 2005
Time: 11:00 a.m.
Location: Woodward 149
Judy Goldsmith <goldsmit@cs.uky.edu>
University of Kentucky
The issue of preferences arises in computer science in the context of e-commerce, decision-theoretic planning, and personalization. The computational questions are: How do you elicit a person's preferences, and how do you represent those preferences? Given a preference representation, we then need algorithms to determine whether one instance is preferable to another, to find a most-preferred instance, and to determine whether the described preferences even make sense.
Consider the problem of red vs. green (Hatch New Mexico chilis, of course). There are many features to consider: is the green chili fresh? Was it a rainy year? How hot? (Note that answers to the latter two are strongly correlated.)
Computer Scientist X prefers green to red, given that the red sauce was made with beef stock, and otherwise prefers red to green unless the green chili is from freshly-picked chilis.
One representation of preferences is a CP-net, which uses a digraph to represent dependencies of features ("My preference about color depends on the ingredients and freshness") and conditional preference tables ("If the red is vegetarian and the green is not fresh then I prefer red; if the red is vegetarian and the green is fresh then I prefer the green...").
We say that one instance (vegetarian, fresh, green) is preferred to another (non-veg, not fresh, red) if there is a sequence of single-feature changes that begins with the less-preferred instance and ends with the more-preferred instance, and where each change increases the preferability. We also say that the preferred instance dominates the less-preferred instance.
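To make the flipping-sequence definition concrete, here is a brute-force sketch (my own encoding in Python, not code from the talk) of dominance checking over binary features. The color table follows the chili example above; the preferences over stock and freshness are invented to complete the CP-net. The hardness results below concern cyclic CP-nets, where no search this naive stays feasible.

from collections import deque

# CP-net: feature -> (parents, table mapping parent values to [better, worse]).
CPNET = {
    'stock': ((), {(): ['veg', 'beef']}),
    'fresh': ((), {(): ['fresh', 'old']}),
    'color': (('stock', 'fresh'),
              {('veg', 'old'): ['red', 'green'],
               ('veg', 'fresh'): ['green', 'red'],
               ('beef', 'old'): ['green', 'red'],
               ('beef', 'fresh'): ['green', 'red']}),
}

def improving_flips(outcome):
    """All outcomes reachable from `outcome` by one preference-improving flip."""
    for feat, (parents, table) in CPNET.items():
        better, worse = table[tuple(outcome[p] for p in parents)]
        if outcome[feat] == worse:
            yield {**outcome, feat: better}

def dominates(better_outcome, worse_outcome):
    """True if better_outcome is reachable from worse_outcome via improving flips."""
    frontier = deque([worse_outcome])
    seen = {tuple(sorted(worse_outcome.items()))}
    while frontier:
        o = frontier.popleft()
        for nxt in improving_flips(o):
            if nxt == better_outcome:
                return True
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                frontier.append(nxt)
    return False

# (vegetarian, fresh, green) dominates (beef stock, not fresh, red): prints True.
print(dominates({'stock': 'veg', 'fresh': 'fresh', 'color': 'green'},
                {'stock': 'beef', 'fresh': 'old', 'color': 'red'}))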
We have shown that both the dominance problem and the consistency problem are PSPACE-complete for cyclic CP-nets. I will discuss the implications of these results and sketch the proofs.
This is joint work with Jerome Lang, Mirek Truszczynski, and Nic Wilson.
Proving Termination Automatically By Dependency Pairs
Date: Tuesday, March 1, 2005
Time: 11:00 a.m.
Place: Woodward 149
Juergen Giesl <giesl@informatik.rwth-aachen.de>
RWTH Aachen
Germany
Termination is a fundamental property of programs and the proof of termination
is an important task in program verification. Since verification is usually
very costly, our goal is to automate this task as much as possible.
This talk presents the dependency pair approach, which is one of the most
powerful techniques for automated termination
proofs. After introducing the basic ideas of this approach, we present several
recent improvements which considerably reduce and simplify the constraints
produced for termination proofs. Moreover, we show how the nondeterminism and
search problems of the dependency pair approach can be mechanized efficiently.
We implemented our results in the automated termination prover AProVE and evaluated
them on large collections of examples.
The talk is based on joint work with Thomas Arts, Rene Thiemann, Pete Schneider-Kamp,
and Stephan Falke.
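For orientation, the sketch below (my own Python encoding, using the standard minus/quot example from the dependency pair literature; it is not AProVE) shows the basic first step of the approach: from each rule l -> r and each subterm of r whose root is a defined symbol, build a dependency pair by marking the root symbols.

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def root(t):
    return t[0] if isinstance(t, tuple) else t

def dependency_pairs(rules):
    defined = {root(l) for l, _ in rules}
    def mark(t):                          # mark the root symbol: f -> f#
        return (t[0] + '#', *t[1:])
    return [(mark(l), mark(t))
            for l, r in rules
            for t in subterms(r)
            if isinstance(t, tuple) and root(t) in defined]

# minus(x, 0) -> x            minus(s(x), s(y)) -> minus(x, y)
# quot(0, s(y)) -> 0          quot(s(x), s(y)) -> s(quot(minus(x, y), s(y)))
RULES = [
    (('minus', 'x', ('0',)), 'x'),
    (('minus', ('s', 'x'), ('s', 'y')), ('minus', 'x', 'y')),
    (('quot', ('0',), ('s', 'y')), ('0',)),
    (('quot', ('s', 'x'), ('s', 'y')), ('s', ('quot', ('minus', 'x', 'y'), ('s', 'y')))),
]
for lhs, rhs in dependency_pairs(RULES):
    print(lhs, '->', rhs)
# Prints three pairs (as tuples), encoding:
#   minus#(s(x), s(y)) -> minus#(x, y)
#   quot#(s(x), s(y))  -> quot#(minus(x, y), s(y))
#   quot#(s(x), s(y))  -> minus#(x, y)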
Intelligent and Agile DNA Molecules
Date: Monday, February 21, 2005
Time: 10:00 a.m.
Location: Woodward 149
Milan N. Stojanovic <mns18@columbia.edu>
Department of Medicine, Columbia University
I will discuss a synthetic approach toward achieving complex behaviors of nucleic
acids in solution and on surfaces. In particular, molecular computation elements,
sensors, and machines will be described, together with their integration into
circuits, automata, and molecular robots.
Biologically inspired approaches to computer security
Date: February 10, 2005
Time: 11:00 - 12:15 p.m.
Location: Woodward 149
Professor Stephanie Forrest <forrest@cs.unm.edu>
Department of Computer Science, UNM
Our software infrastructure confronts a situation increasingly similar to the challenges faced by living organisms in a biological ecosystem. Highly dynamic, complex, and hostile environments are placing new demands on computation. Using biology as an example, we can potentially change how we engineer software infrastructures by using principles such as adaptability, homeostasis, redundancy, and diversity.
The talk will illustrate how biological design principles are providing new insights and approaches in the field of computer security. It will emphasize recent results in automated diversity and in using epidemiological approaches to understand and control widespread network-based attacks.
Strip Mining the Sky
Date: February 1, 2005
Time: 11:00 - 12:15 p.m.
Location: Woodward 149
Professor John McGraw <mcgraw@lodestar.org>
Department of Physics and Astronomy, UNM
The universe is a mighty big place. We know a great deal about it, and our understanding of the content, structure, texture and extent of the universe is accelerating, as apparently is the universe itself!
UNM's Department of Physics and Astronomy is engaged in creating a unique, unbiased, and precise survey of the sky that will shed light (quite literally) on many of the fundamental issues of astrophysics, including the population of nearby stars and brown dwarfs, the structure of our Milky Way Galaxy, the nature of lethal activities in the environs of massive black holes in the centers of other galaxies, and the discovery of distant supernova explosions.
The telescope that will acquire our data is unique in that it has no moving parts: it does not track the stars and galaxies. Rather, our telescope is fixed to the Earth, and uses charge-coupled devices under computer control to produce a continuous multicolor image of the sky that runs from sunset to sunrise every clear night.
Every night for seven years the telescope will shovel in data from the same long, thin strip of sky, creating a data mine unprecedented in astronomy. Each night the telescope will produce 150 to 400 Gbytes of image data. And therein lies the rub. How do we discover a supernova explosion amongst a Sagan (billions and billions) of stars?
We have some ideas, but quite frankly, we're counting upon the ideas, experience and skill of newly-found collaborators who will attend this talk in UNM's CS Department to help us solve our wonderful problems.
Other CS-related problems range through real-time (i.e., one-nanosecond precision) detector control, remote observing and observatory control, algorithms, data structures, databases and data mining. Analysis of the data mine will necessitate application of cutting-edge computer science techniques, including, as appropriate, artificial intelligence, adaptive systems, and scientific visualization.
The telescope, its data and data systems will be described, as will the scientific programs that drive its global design. You are cordially invited to attend, and then to dig in.
Computer and Computational Science at Los Alamos National Labs "Where we are and some thoughts for the future"
Date: January 27, 2005
Time: 11:00 - 12:15 p.m.
Location: Woodward 149
Bill Feiereisen <wjf@lanl.gov>
Los Alamos National Laboratory
When one thinks of Los Alamos National Laboratory one doesn't normally think of computers and computational science. One thinks of that place up on the Pajarito Mesa where nuclear weapons were born.
Things have evolved much since then. Los Alamos is a National Security Laboratory. Its major security mission is the maintenance of nuclear weapons, but it now plays a very strong role in the science of threats against national security. What is less well known is the vibrant basic research community in areas such as materials science, global climate change, energy security, socio-technological simulations and biology.
So what does this have to do with computers and computational science? In the modern world, some of these areas "do not lend themselves well to experimentation." Obviously simulation plays a huge role as a tool in modern science, but its use is not as simple as "buy a big computer and run a code." Computer and computational science at LANL largely revolves around foreseeing the future of hardware and software and building toward it.
This talk is meant to give a flavor of the software and hardware work that lies behind the scenes in the supercomputing simulations that occur at LANL. I'll talk about the simulations themselves, some of the algorithm work, and then the software computing environments and hardware with which we work.