UNM Computer Science

Colloquia Archive - Spring 2004



Fault-Localization in Distributed Resource Allocation

Date: Thursday April 29, 2004
Time: 11am-12:15pm
Location: Woodward 149

Scott Pike <pike@aya.yale.edu>
Department of Computer Science and Engineering
Ohio State University

Abstract:
Ideally, faults should be isolated within small, local neighborhoods of impact. Failure locality quantifies impact as the radius of the largest set of processes disrupted by a given fault. The locality radius of a distributed algorithm demarcates a halo beyond which faults are masked. As such, fault-local algorithms are central to engineering survivable distributed systems, because they protect against cascading and epidemic failures.

My work makes theoretical and practical contributions to fault-localization for the generalized dining philosophers problem, subject to crash failures. The optimal crash locality for synchronous dining is 0, but this metric degrades to 2 for asynchronous dining. My work resolves the apparent complexity gap by constructing the first known dining algorithms to achieve crash locality 1 under partial synchrony.

The software-engineering context of my approach consists in augmenting existing dining algorithms with unreliable failure detection oracles. My extended results characterize optimal locality bounds for every detection oracle in the Chandra-Toueg hierarchy with respect to four practical models of mutual exclusion. Analysis of the resulting lattice identifies the weakest detector for solving each dining problem, and discovers two points of discontinuity indicating unresolved complexity gaps.

Bio:
Scott Pike is a Doctoral Candidate in Computer Science & Engineering at Ohio State University. He received his M.S. in Computer Science from Ohio State in 2000, and his B.A. in Philosophy from Yale University in 1996. His research interests focus on software engineering and distributed computing, and, more concretely, on scalable approaches to building agile, adaptive, and survivable components for distributed systems.

Improving Microprocessor Performance and Energy-efficiency by Exploiting OS-aware Architecture Design

Date: Tuesday April 27, 2004
Time: 11am-12:15pm
Location: Woodward 149

Tao Li <tli3@ece.utexas.edu>
Department of Electrical and Computer Engineering
University of Texas at Austin

Abstract:
The Operating System (OS), which manages both hardware and software resources, constitutes a major component of today's complex systems. Many modern and emerging workloads (e.g., databases, web servers, and file/e-mail applications) exercise the OS significantly. However, microprocessor designs and (performance/power) optimizations have largely been driven by user-level applications. In this talk, I will present the advantages and benefits of integrating the OS component into processor architecture design.

In the first part of my talk, I will show how control flow prediction hardware, which is critical to delivering instruction-level parallelism (ILP) and pipelining performance on today's highly speculative and deeply pipelined machines, can be cost-effectively adapted to significantly improve its speculation accuracy on exception-driven, intermittent OS execution. In the second part of my talk, I will address the adaptation of processor resources to reduce OS power on today's high-complexity processors, which exploit aggressive hardware design to maximize performance across a wide range of targeted applications.

Bio:
Tao Li is currently a Ph.D. candidate in Computer Engineering at the Electrical and Computer Engineering Department, University of Texas at Austin. His research interests include computer and system architecture, operating systems, energy-efficient design, modeling, simulation and evaluation of computer systems, and hardware system prototyping.

Language Support for Generic Programming

Date: Tuesday April 20, 2004
Time: 11am-12:15pm
Location: Woodward 149

Andrew Lumsdaine <lums@cs.indiana.edu>
Director, Indiana University Pervasive Technology Labs, Open Systems Lab
Associate Professor, Computer Science Department

Abstract:
Many modern programming languages support basic generic programming, sufficient to implement type-safe polymorphic containers. Some languages have moved beyond this basic support to a broader, more powerful interpretation of generic programming, and their extensions have proven valuable in practice, particularly for the development of reusable software libraries. Fundamental to realizing generic algorithms is the notion of abstraction: generic algorithms are specified in terms of abstract properties of types and algorithms, the specification of which we call "concepts". Although many languages today have support for "generics", they do not directly support concepts, making it difficult to fully leverage the potential of generic programming in modern software construction. This talk reports on recent work to better understand concepts. Building on previous work by Kapur and Musser, we provide a language-independent definition for concepts. We show how this definition can be realized in the context of different programming languages and discuss how it could be used to address the limitations of existing languages for generic programming.
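
As a rough, language-specific illustration of what a concept buys a generic algorithm (a sketch only; the talk's definition of concepts is language-independent and is not tied to Python), the fragment below uses Python's typing.Protocol to state the single requirement a minimum-finding algorithm places on its element type:

# A minimal sketch: a Protocol plays the role of a "concept" -- an abstract
# specification of the operations a type must support for a generic
# algorithm to apply to it.
from typing import Iterable, Protocol, TypeVar

class LessThanComparable(Protocol):
    def __lt__(self, other: "LessThanComparable") -> bool: ...

T = TypeVar("T", bound=LessThanComparable)

def minimum(items: Iterable[T]) -> T:
    """A generic algorithm: valid for any type modeling the concept above."""
    it = iter(items)
    best = next(it)            # assumes a non-empty sequence
    for x in it:
        if x < best:           # the only operation the concept requires
            best = x
    return best

print(minimum([3, 1, 2]))          # 1
print(minimum(["pear", "apple"]))  # apple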

Shared Risk at the National Scale

Date: Thursday April 15, 2004
Time: 11am-12:15pm
Location: Woodward 149

Dan Geer
VP/Chief Scientist
Verdasys
Waltham, Massachusetts

Abstract:
The wonderful thing about a small town is that you know everyone; the terrible thing is that they all know you. The wonderful thing about a national infrastructure is that you are closely connected to everything; the terrible thing is that it is all closely connected to you. At the national scale, what are the shared risks and, hence, the shared solutions? What sort of collective action is desirable? Are there tools and strategies we can borrow from other fields and do we have the time to invent?

Efficient Haplotype Inference on Pedigrees and Haplotype Based Disease Gene Mapping

Date: Tuesday, April 13, 2004
Time: 11am-12:15pm
Location: Woodward 149


Jing Li <jing.li@email.ucr.edu>
Ph.D. Candidate, Department of Computer Science and Engineering, University of California, Riverside

Abstract:
With the completion of the Human Genome Project, an (almost) complete human genomic DNA sequence has become available. An important next step in human genomics is to determine genetic variations among humans and the correlation between genetic variations and phenotypic variations (such as disease status or quantitative traits). The patterns of human DNA sequence variations can be described by SNP (single nucleotide polymorphism) haplotypes. However, humans are diploid and, in practice, haplotype data cannot be collected directly, especially in large-scale sequencing projects, mainly due to cost considerations. Instead, genotype data are collected routinely in large sequencing projects. Hence, efficient and accurate computational methods and computer programs for inferring haplotypes from genotypes are in high demand.

We are interested in the haplotype inference problem on pedigrees and haplotype-based association mapping methods for identifying disease genes. We study haplotype reconstruction under the Mendelian law of inheritance and the minimum recombination principle on pedigree data. We prove that the problem of finding a minimum-recombinant haplotype configuration (MRHC) is in general NP-hard. An iterative algorithm based on blocks of consecutive resolved marker loci (called block-extension) is proposed. It is very efficient and accurate for data sets requiring few recombinants. A polynomial-time exact algorithm for haplotype reconstruction without recombinants is also presented. This algorithm first identifies all the necessary constraints based on the Mendelian law and the zero-recombinant assumption, and represents them using a system of linear equations over the cyclic group Z2. All feasible haplotype configurations can then be obtained by Gaussian elimination. For genotypes with missing alleles, we develop an effective integer linear programming (ILP) formulation of the MRHC problem and a branch-and-bound strategy that uses a partial order relationship (and some other special relationships) among variables to decide the branching order. When multiple solutions exist, a best haplotype configuration is selected based on a maximum likelihood approach. The ILP algorithm works for any pedigree structure, regardless of the number of recombinants, and is effective for problems of any practical size. We have implemented the above algorithms in a software package called PedPhase and tested them on simulated data sets as well as on a real data set. The results show that the algorithms perform very well.
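
To make the zero-recombinant step above concrete, here is a minimal sketch (in Python, and not the PedPhase implementation) of solving a system of linear equations over Z2 by Gaussian elimination; in the actual algorithm the constraint system is derived from the Mendelian law and the pedigree, which is omitted here:

# Each row is (coefficients, rhs); all arithmetic is mod 2 (XOR).
def solve_gf2(rows, n):
    """Return one solution of the system over Z2, or None if inconsistent.
    rows: list of (coeffs, rhs) with coeffs a length-n list of 0/1."""
    rows = [(c[:], r) for c, r in rows]
    pivots = []                      # (row index, column) of each pivot
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][0][col]), None)
        if piv is None:
            continue                 # free variable: multiple solutions exist
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[r][0])],
                           rows[i][1] ^ rows[r][1])
        pivots.append((r, col))
        r += 1
    if any(not any(c) and rhs for c, rhs in rows):
        return None                  # 0 = 1: no feasible configuration
    x = [0] * n                      # free variables fixed to 0
    for i, col in pivots:
        x[col] = rows[i][1]
    return x

# x0 + x1 = 1, x1 + x2 = 0  ->  one feasible assignment
print(solve_gf2([([1, 1, 0], 1), ([0, 1, 1], 0)], 3))   # [1, 0, 0]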

Haplotype information is highly valuable for disease gene association mapping, an important problem in biomedical research. We also develop a new algorithmic method for haplotype mapping of case-control data based on a density-based clustering algorithm, and propose a new haplotype (dis)similarity measure. The method regards haplotype segments as data points in a high-dimensional space. Clusters are then identified using a density-based clustering algorithm. A Z-score based on the numbers of cases and controls in a cluster can be used as an indicator of the degree of association between the cluster and the disease under study. Preliminary experimental results on an independent simulated data set, and on a real data set with a known disease gene location, show that our method can predict gene locations with high accuracy, even when the rate of phenocopies is high.
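
The abstract does not spell out the exact form of the cluster statistic; the following sketch shows one standard way such a Z-score could be computed from case and control counts (an assumed one-sample proportion form, for illustration only):

from math import sqrt

def cluster_z_score(cases_in, controls_in, cases_total, controls_total):
    """Compare the case fraction inside a cluster with the overall case rate."""
    n = cases_in + controls_in                           # cluster size
    p0 = cases_total / (cases_total + controls_total)    # null case rate
    p_hat = cases_in / n                                 # observed case rate
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# 40 of 50 cluster members are cases, against a 50/50 background:
print(round(cluster_z_score(40, 10, 500, 500), 2))       # 4.24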

Biography:
Jing Li is currently a Ph.D. candidate in the Department of Computer Science and Engineering at the University of California - Riverside. He received a B.S. in Statistics from Peking University, Beijing, China in July 1995 and an M.S. in Statistical Genetics from Creighton University in August 2000. He was a winner of the ACM Student Research Competition in 2003. His recent research interests include bioinformatics/computational molecular biology, algorithms, and statistical genetics. He is particularly interested in developing algorithms for haplotype inference and haplotype-based disease gene mapping.

The Mystery of the Human Cerebellum: What does it do?

Date: Tuesday, April 6, 2004
Time: 11am-12:15pm
Location: Woodward 149

Dr. Nancy Andreasen, <nca@unm.edu>
Director, MIND Institute

Abstract:
For many years the cerebellum has been viewed as a cerebral organ primarily dedicated to coordinating motor activity. During recent years, however, new evidence has emerged indicating that the cerebellum may be a key cognitive organ in the brain as well. Several models of cerebellar nonmotor or cognitive learning have been proposed. One model (Keele and Ivry, 1990, 1997) argues that the cerebellum functions as a clock or timekeeper, based primarily on lesion studies that indicate cerebellar injury leads to an impairment in the ability to estimate time intervals or imitate timed rhythm sequences. PET studies recently conducted in Iowa confirm that a variety of timing tasks produce robust activation in the cerebellum, as well as the thalamus and insula. Another model has been proposed by Ito (1997). He argues from cerebellar anatomy and suggests that the cerebellum is composed of large numbers of units that he refers to as microcomplexes. These provide an error-driven adaptive control mechanism. Microcomplexes are subunits within the cerebellum that facilitate its function. They receive dual inputs from mossy and climbing fibers; the climbing fibers detect errors and act to reorganize internal connections, while the mossy fibers "drive" the complex. The microcomplexes function like computer chips or microprocessors: they can be used to perform a vast array of different functions. They are connected to diverse brain regions (e.g., multiple different cortical areas) and therefore can play many diverse roles in brain function, including all types of cognitive processing.
In addition, evidence has also emerged that suggests that cerebellar function is impaired in schizophrenia. Studies of schizophrenia using the tools of functional imaging have found a relatively consistent pattern of abnormalities in distributed brain regions that include the cerebellum. Abnormalities are seen in these studies in both the vermis and in the cerebral hemispheres in patterns that are task-related. Patients with schizophrenia have decreased blood flow in the cerebellum in a broad range of tasks that tap into diverse functional systems of the brain, including memory, attention, social cognition, and emotion. Vermal abnormalities are more frequently noted in tasks that use limbic regions (e.g., studies of emotion), while more lateral neocerebellar regions are abnormal in tasks that use neocortical regions (e.g., memory encoding and retrieval). It is therefore highly plausible that the symptoms and cognitive abnormalities of schizophrenia may arise because of malfunctions in a group of distributed brain regions and that the cerebellum is a key node in this malfunctioning group of regions.


Cognitive Constraints in Navigational Design: From Psychology to Software

Date: Thursday, April 1st, 2004
Time: 11am-12:15pm
Location: Woodward 149


Susanne Jul, sjul@umich.edu
Computer Science and Engineering, University of Michigan

Abstract:
Identifying which design constraints -- limitations on what constitutes an acceptable solution to a design problem -- do or do not apply to a particular design situation is key to how quickly and how well the design problem is solved. In this talk, I present a case study that derives a set of design constraints for navigational design from existing empirical evidence surrounding navigational and spatial cognition. (I use "navigation" to mean the task of determining where places and things are, how to get to them and actually getting there, and my focus is on support for human navigation within the environment under design.)


The study yielded a set of design elements that are key to navigational cognition along with a set of design principles describing how manipulations of these elements affect navigational performance. During the talk, I will demonstrate a navigational design for a spatial multiscale environment (Jazz) that is based on the derived principles. I will also present results from user testing of this design that show significant increases in navigational performance, along with significant decreases in effort, on a directed search task.


The immediate value of identifying design elements and constraints upon them lies in their explicit use in design. However, I anticipate that the greater benefit will lie in embedding them in software development tools, such as application frameworks, so that they are used implicitly by both developers and designers (future work).

Optimization of Biosystems: Discrete structures, polytopes and suitable algorithms

Date: Tuesday, March 30th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Stefan Wolfgang Pickl
Department of Mathematics, Center for Applied Computer Science Cologne
University of Cologne

Current address:
Department of Computer Science
University of New Mexico

Abstract:
This talk will give an introduction to the challenging field of optimization of biosystems using discrete structures and suitable algorithms. Many optimization problems can be described and solved with the aid of polytopes by exploiting their geometrical and combinatorial structure. The presentation describes two cases: one where polytopes determine feasible sets (Example 1 - economathematics) and one where polytopes are suitable tools ("keys") for optimization techniques (Example 2 - data analysis in the life sciences). In these fields, a special representation of polytopes may be used to construct an algorithm that can analyze and optimize a nonlinear time-discrete system. The underlying theory of the algorithm is based on polytopes and linear programming techniques, such that only the extreme points of the polytope need to be considered successively. Their topological behaviour can be used to derive suitable decision criteria. Theoretical results are presented as well as numerical solutions. As an example, the project TEMPI (Technology Emissions Means Process Identification) is discussed.
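
The algorithmic principle the abstract leans on, namely that a linear objective over a polytope is optimized at an extreme point, can be illustrated with a toy example (a brute-force sketch for a tiny 2-D polytope, not the TEMPI algorithm):

from itertools import combinations

# Polytope in R^2 given by a_i . x <= b_i (a unit square here, for illustration)
A = [(1, 0), (-1, 0), (0, 1), (0, -1)]
b = [1, 0, 1, 0]
c = (3, 2)                                    # objective: maximize c . x

def vertices(A, b, eps=1e-9):
    """Enumerate vertices as feasible intersections of constraint pairs."""
    verts = []
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                          # parallel constraints
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(ai[0] * x[0] + ai[1] * x[1] <= bi + eps for ai, bi in zip(A, b)):
            verts.append(x)
    return verts

# It suffices to examine the extreme points:
best = max(vertices(A, b), key=lambda x: c[0] * x[0] + c[1] * x[1])
print(best)                                   # (1.0, 1.0), objective value 5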

Processes in KaffeOS: Isolation, Resource Management, and Sharing for Java

Date: Thursday, March 25th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Godmar Back, <gback@stanford.edu>
Department of Computer Science, Stanford University

Abstract:
Single-language runtime systems, such as Java virtual machines, are widely deployed platforms for executing untrusted code. These runtimes provide some of the features that operating systems provide: inter-application memory protection and basic system services. They do not, however, provide the ability to isolate applications from each other, or limit their resource consumption.

In this talk, I will present KaffeOS, a Java runtime system that provides these features. The KaffeOS architecture takes many lessons from operating system design, such as the use of a user/kernel boundary, and employs garbage collection techniques, such as write barriers. It supports the OS abstraction of a process in a Java virtual machine. Each process executes as if it were run in its own virtual machine, including separate garbage collection of its own heap. The difficulty in designing KaffeOS lay in balancing the goals of isolation and resource management against the goal of allowing direct sharing of objects to enhance performance and scalability.

I will present performance results that show that KaffeOS can be used to effectively thwart denial-of-service attacks by untrusted or misbehaving code, and demonstrate the effectiveness of KaffeOS's sharing model. Finally, I will also discuss what I view should be the next steps in making type-safe language runtime systems ready for use in robust and scalable multi-process environments.

Biography:
Godmar Back works as a postdoctoral researcher with Professor Dawson Engler. He received his PhD from the University of Utah in 2002. His research interests lie at the intersection of systems and programming languages. He currently works on MJ, a system for statically checking Java code. Before coming to Stanford, he designed and implemented KaffeOS, a Java runtime system that provides process isolation and resource management for multiple applications in a single JVM. He has also worked on various OS projects, such as the Utah OSKit and the Fluke microkernel.

On the Psychological Status of Linguistic Units

Date: Thursday, March 11th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Jay McClelland, <jlm@cnbc.cmu.edu>
Department of Psychology, Carnegie Mellon University
and Center for the Neural Basis of Cognition Carnegie Mellon and the University of Pittsburgh

Abstract:
Through the invention and proliferation of written language, it has become second nature to view utterances as composed of words, words as composed of morphemes and syllables, and syllables as composed of phonetic segments. In this talk I will argue from the properties of connectionist models and from facts about language that have been pointed out by Bybee and others that it might be best to view all of these units, not as items with psychological reality as units per se, but as contrivances that have proven useful for the construction of a notational system that allows for the approximate transcription of spoken language. The development of explicit theory that eschews all units is a task for the future, a journey I at least am just beginning. I intend to explore in future work how far such a journey may lead us and will use this talk to sketch out a few initial steps in this direction.

New Algorithms and Software for Treatment Planning Problems in Intensity Modulated Radiation Therapy

Date: Tuesday, March 9th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Shuang (Sean) Luan, <sluan@cse.nd.edu>
Department of Computer Science and Engineering
University of Notre Dame

Abstract:
Computer-assisted radiotherapy is an emerging interdisciplinary area that applies state-of-the-art computing technologies to aid the diagnosis, design, optimization, and operation of modern radiation therapy. In this talk, we present some interesting problems and their solutions in this exciting area.

Intensity-modulated radiation therapy (IMRT) is a modern cancer treatment technique that aims to deliver a highly conformal dose to a target tumor while sparing the surrounding normal tissues and critical structures. A key to performing IMRT is the accurate and efficient delivery of discrete dose distributions using the linear accelerator and the multileaf collimator. Leaf sequencing problems arise in this setting; their goal is to compute a treatment plan that delivers the given dose distributions in the minimum amount of time.

Existing leaf sequencing algorithms, both in commercial planning systems and in the medical literature, are heuristics and do not guarantee the quality of the computed treatment plans, which in many cases results in prolonged delivery time and compromised treatment quality. In this talk, we present new MLC leaf sequencing algorithms and software. Our new algorithms, based on a novel unified approach and geometric optimization techniques, are very efficient and guarantee the optimal quality of the output treatment plans. Our ideas include formulating the leaf sequencing problems as computing shortest paths in a weighted directed acyclic graph and building the graph by computing optimal bipartite matchings on various geometric objects. Our new IMRT algorithms run very fast on real medical data (in only a few minutes). Comparisons of our software with commercial planning systems and with the best-known leaf sequencing algorithm in the medical literature showed substantial improvement. The treatment plans computed by our software not only take much less delivery time (up to 75% less) but also have much better treatment quality. Our software has already been used for treating cancer patients at a couple of medical centers.
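
One of the ingredients named above, shortest paths in a weighted directed acyclic graph, is illustrated by the following sketch (a generic textbook formulation; the graphs built by the actual leaf-sequencing software encode dose distributions and are far larger):

from math import inf

def dag_shortest_path(n, edges, source):
    """edges: list of (u, v, w) with u < v, so 0..n-1 is already a
    topological order (an assumption made to keep the sketch short)."""
    dist = [inf] * n
    dist[source] = 0
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    for u in range(n):                 # relax edges in topological order
        if dist[u] == inf:
            continue
        for v, w in adj[u]:
            dist[v] = min(dist[v], dist[u] + w)
    return dist

edges = [(0, 1, 2), (0, 2, 5), (1, 2, 1), (1, 3, 7), (2, 3, 3)]
print(dag_shortest_path(4, edges, 0))      # [0, 2, 3, 6]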


Overcoming Communication, Distributed Systems, and Simulation Challenges: A Case Study Involving the Protection and Control of the Electric Power Grid Using a Utility Intranet Based on Internet Technology

Date: Tuesday, March 2nd, 2004
Time: 11am-12:15pm
Location: Woodward 149

Ken Hopkinson, <hopkik@cs.cornell.edu>
Department of Computer Science, Cornell University


Abstract:
My thesis research has drawn upon the fields of simulation, networking, and distributed computing to examine the inherent problems and potential solutions in using Internet technology in real-time environments, with a particular focus on the electric power grid. As recent events have dramatized, the electric power grid is under increasing strain as demand for energy increases while the existing transmission infrastructure has remained largely constant. This makes the power grid an attractive target for study for a number of reasons. It is a large real-time system with well-established operating requirements. The power grid is also appealing due to the interest that its constituents have shown in augmenting their infrastructure with communication to improve its operation. Recent standards such as the Utility Communication Architecture (UCA) and research efforts such as the use of Wide Area Measurement Systems (WAMS) in the Western United States are just two major examples of the active interest in the electric power industry in adopting a private Utility Intranet based on Internet technology to improve the efficiency and reliability of the power grid. Early efforts have assumed that TCP would be the underpinning of any solution, owing to its widespread adoption, without regard to the protocol's performance in real-time situations. This problem is representative of a larger issue: the affordability and ubiquity of the Internet make its protocols and equipment an obvious choice for any undertaking involving communication, but Internet technology was not created with real-time requirements in mind. In my thesis work, I have focused on four main areas to gain a better understanding of the issues involved in the use of Internet technology in real-time applications and their potential solutions:

My talk will present an overview of the major results that have come out of these research initiatives and will discuss potential directions for future work.


Data Analysis and Visualization Environments for Large Scale Simulation

Date: Thursday, February 26th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Constantine "Dino" Pavlakos, <cjpavla@sandia.gov>
Manager, Visualization and Data Department
Sandia National Laboratories

Abstract:
Sandia National Laboratories, together with other DOE laboratories, industry and academic partners, is developing advanced technologies that are setting a standard for high performance data exploration and visualization. This work is driven by requirements for the analysis of very large-scale (multi-terabyte) complex scientific application data and is closely coupled to laboratory efforts to provide scalable computing. The strategy for delivering advanced data analysis and visualization capabilities is based on the themes of scalability, use of commodity-based clustered hardware, accessibility from the office/desktop, ease of use, and advanced interfaces and work environments. While working toward a complete end-to-end system of high performance services that can be accessed from the desktop, the current focus is on delivery of scalable data services and scalable rendering, together with the deployment of high-end infrastructure and facilities.

This talk will present an overview of such activities and related accomplishments at Sandia, together with a discussion of related issues and an architectural characterization of environments that effectively support high performance computing (including a brief description of the environment planned for Sandia's ASCI Red Storm machine).

Higher-Order Transformation and the Distributed Data Problem

Date: Tuesday, February 24th, 2004
Time: 11am-12:15pm
Location: Woodward 149


Victor Winter, <vwinter@mail.unomaha.edu>
Computer Science Department, University of Nebraska at Omaha

Abstract:
The control mechanisms offered by strategic programming have been successfully used to address a variety of problems relating to confluence and termination. However, the application of strategic programming to problems of increasing complexity has raised another issue, namely how auxiliary data fits within a strategic framework. The distributed data problem characterizes the problem posed by auxiliary data. This problem arises from a discord between the semantic association of terms within a specification and the structural association of terms resulting from the term language definition.

My research is based on the premise that higher-order rewriting provides a mechanism for dealing with auxiliary data conforming to the tenets of rewriting. In a higher-order framework, the use of auxiliary data is expressed as a rule. Instantiation of such rules can be done using standard (albeit higher-order) mechanisms controlling rule application (e.g., traversal). Typically, a traversal-driven application of a higher-order rule will result in a number of instantiations. If left unstructured, these instantiations can be collectively seen as constituting a rule base whose creation takes place dynamically. However, such rule bases again encounter difficulties with respect to confluence and termination. In order to address this concern, the notion of strategy construction is lifted to the higher order as well. That is, instantiations result in rule bases that are structured to form strategies. Nevertheless, in many cases, simply lifting first-order control mechanisms to the higher order does not permit the construction of strategies that are sufficiently refined. This difficulty is alleviated through the introduction of the transient combinator. The interplay between transients and more traditional control mechanisms enables a variety of instances of the distributed data problem to be elegantly solved in a higher-order setting. These ideas are formalized in a higher-order strategic programming language called TL.
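
For readers unfamiliar with strategic programming, the following first-order sketch (in Python, and far short of TL's higher-order rules and transient combinator) shows how rewrite rules and traversal combinators fit together:

def choice(s1, s2):
    """Apply s1; if it fails (returns None), apply s2 instead."""
    return lambda t: s1(t) if s1(t) is not None else s2(t)

def try_(s):
    return choice(s, lambda t: t)          # apply s, else leave the term alone

def topdown(s):
    """Apply s at a node, then recurse into the rewritten node's subterms.
    s must be total here, e.g. wrapped in try_."""
    def go(t):
        t = s(t)
        if isinstance(t, tuple):           # terms are ("op", child, ...)
            return (t[0],) + tuple(go(k) for k in t[1:])
        return t
    return go

# A rewrite rule: x + 0 -> x, on terms written as ("+", lhs, rhs)
def plus_zero(t):
    if isinstance(t, tuple) and t[0] == "+" and t[2] == 0:
        return t[1]
    return None

simplify = topdown(try_(plus_zero))
print(simplify(("*", ("+", "a", 0), ("+", "b", 0))))   # ('*', 'a', 'b')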


Bio:
Victor Winter is an assistant professor in the Computer Science department at the University of Nebraska at Omaha. He received his Ph.D. in Computer Science from the University of New Mexico in 1994. From 1995 to 2001, Dr. Winter worked at Sandia National Laboratories, where his research efforts focused on high-assurance software development.

Maximizing the Spread of Influence in a Social Network

Date: Thursday, February 19th, 2004
Time: 11am-12:15pm
Location: Woodward 149

David Kempe, <kempe@cs.washington.edu>
Department of Computer Science and Engineering, University of Washington

Abstract:
A social network - the graph of relationships and interactions within a group of individuals - plays a fundamental role as a medium for the spread of information, ideas, and influence among its members. An idea or innovation will appear - for example, the use of cell phones among college students, the adoption of a new drug within the medical profession, or the rise of a political movement in an unstable society - and it can either die out quickly or make significant inroads into the population.

The resulting collective behavior of individuals in a social network has a long history of study in sociology. Recently, motivated by applications to word-of-mouth marketing, Domingos and Richardson proposed the following optimization problem: allocate a given "advertising" budget so as to maximize the (expected) number of individuals who will have adopted a given product or behavior.

In this talk, we will investigate this question under the mathematical models of influence studied by sociologists. We present and analyze a simple approximation algorithm, and show that it is guaranteed to reach at least a 1-1/e (roughly 63%) fraction of what the optimal solution can achieve, under many quite general models. In addition, we experimentally validate our algorithm, comparing it to several widely used heuristics on a data set consisting of collaborations among scientists.

(joint work with Jon Kleinberg and Eva Tardos)
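
A compact sketch of the greedy hill-climbing idea behind the 1-1/e guarantee is shown below (the cascade model, edge probability, and graph are illustrative assumptions, not data from the paper):

import random

def simulate_spread(graph, seeds, p=0.1):
    """One cascade: each newly active node activates each neighbor with prob p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def expected_spread(graph, seeds, trials=200):
    """Monte Carlo estimate of the expected number of activated nodes."""
    return sum(simulate_spread(graph, seeds) for _ in range(trials)) / trials

def greedy_seeds(graph, k):
    """Repeatedly add the node with the largest estimated marginal gain."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: expected_spread(graph, seeds | {v}))
        seeds.add(best)
    return seeds

graph = {0: [1, 2], 1: [2, 3], 2: [4], 3: [4, 5], 4: [5], 5: []}
print(greedy_seeds(graph, 2))    # a set of 2 seed nodes; estimates are stochastic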


Dynamic Load Balancing and the Zoltan Toolkit

Date: Tuesday, February 17th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Karen Devine, <kddevin@sandia.gov>
Discrete Algorithms and Math Department
Sandia National Laboratories

Abstract:
Efficient and effective dynamic load balancing is critical in parallel, unstructured and/or adaptive computations. Dynamic load-balancing algorithms assign work to processors during the course of a computation to reflect changing processor workloads (e.g., due to mesh refinement in adaptive finite element simulations) or changing data locality (e.g., due to deformation in crash simulations). The Zoltan library provides a suite of parallel, dynamic load-balancing algorithms, allowing application developers to easily compare the effectiveness of different strategies within their applications. Zoltan also provides tools commonly needed by parallel applications, including data migration functions, distributed data directories, unstructured communication utilities, matrix ordering, and dynamic memory debugging utilities. Zoltan's object-based interface and data-structure-neutral design make it easy to use Zoltan in a variety of applications. This talk will describe Zoltan's dynamic load-balancing algorithms, design, and usage in applications. Results from applications using Zoltan will also be presented.
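
The rebalancing decision at the heart of any such algorithm can be illustrated with a toy sketch (a generic greedy repartitioning illustration only; this is not the Zoltan interface or any of its algorithms):

def rebalance(weights, owner, nprocs):
    """weights: {elem: work}, owner: {elem: proc}. Returns a new owner map."""
    load = [0.0] * nprocs
    for e, w in weights.items():
        load[owner[e]] += w
    target = sum(load) / nprocs
    plan = dict(owner)
    # consider heavy elements first; move one to the least-loaded processor
    # if its current processor is above the target and the move helps
    for e in sorted(weights, key=weights.get, reverse=True):
        src = plan[e]
        dst = min(range(nprocs), key=load.__getitem__)
        if load[src] > target and load[dst] + weights[e] <= load[src]:
            load[src] -= weights[e]
            load[dst] += weights[e]
            plan[e] = dst
    return plan

owner = {"a": 0, "b": 0, "c": 0, "d": 1}
weights = {"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}
print(rebalance(weights, owner, 2))   # {'a': 1, 'b': 0, 'c': 0, 'd': 1}: both loads become 5.0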


Trends and Challenges for Wireless Embedded DSP's

Date: Tuesday, February 10th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Prof. Lawrence Clark, <ltclark@ece.unm.edu>
Electrical and Computer Engineering, UNM


Abstract:
Programmable digital signal processors (DSPs) are key enablers of rapidly expanding wireless markets. The primary handset market, subject to strict power constraints, also creates substantial process scaling barriers. Nonetheless, DSPs in particular, and hand-held ICs in general, are uniquely situated to exploit emerging circuit and SOC design techniques to their advantage, allowing more aggressive processes despite power barriers. This talk discusses some of these challenges and design opportunities.

System Support for the Integrated Management of Quality of Service

Date: Thursday, March 4th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Christian Poellabauer, <chris@cc.gatech.edu>
Department of Computer Science, Georgia Institute of Technology


Abstract:
The increasing number of network-enabled systems and the growing complexity of distributed applications pose numerous challenges for the management and provision of Quality of Service (QoS). Particularly in resource-scarce environments, such as mobile wireless systems, adaptation of applications and system-level resource management are used to provide users with the performance and qualities they need. The distributed management of Quality of Service has been the focus of intensive research efforts and has led to a multitude of techniques at the hardware, network, system, or application layer. However, if multiple such techniques are deployed in a system, an integrated approach to QoS management has to be chosen, to ensure optimal results and to prevent adverse effects resulting from competing techniques.


In this talk, I will address how the adaptation of applications and the management of multiple system constraints can be coordinated to efficiently provide users with the QoS they need. This coordination is supported by Q-Fabric, a collection of operating system extensions, which provide the framework to deploy feedback-based integrated QoS management for distributed applications. Specifically, my talk will focus on the integrated approach to distributed energy management for mobile multimedia applications. The goal is to efficiently deploy and coordinate novel energy management techniques at different layers of a system, while carefully balancing the user-perceived QoS with the system's energy consumption.
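
The feedback idea can be illustrated with a deliberately simple sketch (a generic controller, not the Q-Fabric interfaces; the quality levels, power readings, and budget below are made up):

def adapt_quality(quality, measured_power, power_budget,
                  levels=(0, 1, 2, 3, 4), margin=0.05):
    """Return the next quality level given the current power reading."""
    if measured_power > power_budget * (1 + margin) and quality > levels[0]:
        return quality - 1       # over budget: degrade quality, save energy
    if measured_power < power_budget * (1 - margin) and quality < levels[-1]:
        return quality + 1       # headroom available: improve user-perceived QoS
    return quality               # within the tolerance band: hold steady

quality = 4
for power in (1.8, 1.6, 1.4, 0.9, 0.9):      # hypothetical readings in watts
    quality = adapt_quality(quality, power, power_budget=1.2)
    print(quality)                            # prints 3, 2, 1, 2, 3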


SHORT BIO:
Christian Poellabauer is a Ph.D. candidate in Computer Science at the Georgia Institute of Technology, expecting to graduate in May 2004. He received his Master's degree in Computer Science from the Vienna University of Technology. His research interests are in the area of experimental systems, including real-time systems, operating systems, pervasive computing, and mobile systems. He is a recipient of an IBM Ph.D. Research Fellowship.

An Introduction to Technical Writing

Date: Thursday, January 29th, 2004
Time: 11am-12:15pm
Location: Woodward 149


Patrick Bridges, <bridges@cs.unm.edu>
Department of Computer Science, UNM

Abstract:
Effective technical writing is an essential skill for every graduate student, and virtually every student has had at least one class as an undergraduate on writing essays of various sorts. Unfortunately, these classes generally teach very little that is of practical relevance to writing technical papers and reports. In this colloquium, I will give students a brief introduction to the issues of technical writing and how they differ dramatically from the essay writing most students learned as undergraduates. In particular, I will focus on practical tips for organizing technical papers and presenting technical information, give examples of well-organized and poorly-organized technical writing, and also give some advice on effective language use in technical writing. In addition, I will give a few tips on presenting technical work to large groups, another essential skill that is frequently not taught directly.

Professional Ethics in the University Setting

Date: Tuesday, January 27th, 2004
Time: 11am-12:15pm
Location: Woodward 149

Bernard Moret, <moret@cs.unm.edu>
Department of Computer Science, UNM

Abstract:
Graduate students are engaged in both learning and research. In each area, they have a responsibility to themselves and to their peers -- a responsibility to conduct themselves in a professional manner according to accepted principles of ethics. We will review the guiding principles in the university setting, their reasons for existence, and their consequences for daily behavior in the classroom, in the laboratory, and in the scholarly community in general. We will also briefly review some of the many documents that the University provides in this area for students and faculty that can help guide their behavior in teaching and research.