Date: Tuesday, April 10th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Marcus Magnor
Braunschweig University of Technology, Germany
Abstract:
Ground-breaking advances in science and engineering are frequently triggered by novel technologies becoming available, new application-driven demands, or progress in related disciplines. In visual computing, all three driving forces are currently coming together: imaging technology as well as graphics hardware are progressing at a breathtaking pace; increasing hardware capabilities, in conjunction with mass-market proliferation, create ceaseless demand for novel applications; and at the same time, cognitive neuroscience is beginning to provide quantitative models of visual perception that for the first time allow us to computationally emulate how we see. The field of visual computing can be expected to continue to experience a tremendous increase not only in scientific recognition but also in economic relevance. In my talk, I will present examples of research in visual computing to convince you of the field's scientific as well as economic prospects.
Bio:
Marcus Magnor heads the Computer Graphics Lab at Braunschweig University of Technology, Germany. He received his BA (1995) and MS (1997) in Physics from the University of Wuerzburg and the University of New Mexico, respectively, and his PhD (2000) in Electrical Engineering from the University of Erlangen, Germany. For his post-graduate studies, he first joined the Computer Graphics Lab at Stanford University before establishing his own independent research group focusing on "Graphics-Optics-Vision" at the Max-Planck-Institut für Informatik in Saarbruecken, Germany. His research interests meander along the visual information processing pipeline, from image formation, acquisition, and analysis to image synthesis, display, perception, and cognition. Recent and ongoing research topics include video-based rendering, 3D-TV, augmented vision, video editing, optical phenomena, and astrophysical visualization.
Date: Tuesday, April 1st, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Paul R. Cohen
Information Sciences Institute University of Southern California
Abstract:
The first part of this talk is about a project to have softbots and robots learn the meanings of words. Our goal is to mimic the context in which children learn word meanings, where the learner communicates about what's going on in the immediate environment with a facilitative, forgiving, capable language user. I will describe recent results from Wubble World, a game environment in which kids and their softbots communicate in English, and the softbots gradually learn word meanings. The second part of the talk is about K-12 education and a project we call the internet classroom. I will quickly survey opportunities afforded by the internet classroom for research in many areas of computer science, then focus on the problem of selecting the next task or learning activity in a way that is customized to each student.
Bio:
Paul Cohen attended UCSD as an undergraduate and Stanford for his PhD in Computer Science and Psychology. He was a professor at the University of Massachusetts from 1983 to 2003, when he joined the Information Sciences Institute at the University of Southern California. At ISI, he serves as director of the Center for Research on Unexpected Events and as deputy director of the Intelligent Systems Division. At Stanford, Cohen edited the Handbook of Artificial Intelligence with Avron Barr and Edward A. Feigenbaum (extending to four volumes, eventually), which probably explains why he tries hard, though with mixed success, to understand how AI and other cognitive sciences are co-developing. Cohen is especially interested in challenge problems and research methods; he has written a book and many articles on these subjects and advises government agencies on them. His research develops new methods for learning, planning, and data mining, and applies them to modeling cognitive development, intelligence analysis, various military problems, and education. Cohen is a fellow of the American Association for Artificial Intelligence.
Date: Tuesday, March 25th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Rica Gonen
Yahoo! Research
Abstract:
The talk will focus on two recent papers in sponsored search auctions. The first presents a truthful sponsored search auction based on an incentive-compatible multi-armed bandit mechanism. The mechanism described combines several desirable traits.
The mechanism gives advertisers the incentive to report their true bids, learns the click-through rates of advertisements, allows for slots of different quality, and loses only minimal welfare during the sampling process.
The underlying generalization of the multi-armed bandit mechanism addresses the interplay between exploration and exploitation in an online setting, and is truthful with high probability while allowing for slots of different quality.
As the mechanism progresses, the algorithm more closely approximates the hidden variables (click-through rates) in order to allocate advertising slots to the best advertisements. The resulting mechanism obtains the optimal welfare apart from a tightly bounded loss of welfare caused by the bandit sampling process.
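To make the exploration/exploitation interplay concrete, here is a rough UCB-style sketch of estimating click-through rates while allocating a single slot (an illustration of the general idea only, not the paper's incentive-compatible mechanism):

```python
import math

def allocate_slot(bids, clicks, impressions, t):
    """Choose the ad for a single slot by optimistic expected welfare:
    bid times an upper-confidence-bound (UCB) estimate of the ad's
    click-through rate. Illustrative only; the talk's mechanism adds
    truthful payments and handles multiple slots of different quality."""
    def ucb_ctr(a):
        if impressions[a] == 0:
            return 1.0                      # unexplored ads get priority
        mean = clicks[a] / impressions[a]
        bonus = math.sqrt(2 * math.log(max(t, 2)) / impressions[a])
        return min(1.0, mean + bonus)
    return max(range(len(bids)), key=lambda a: bids[a] * ucb_ctr(a))
```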
The second paper extends the model of the first paper by allowing for time constrained and budget constrained advertisers who arrive and depart in an online fashion. The paper presents an online sponsored search auction that motivates advertisers to report their true budget, arrival time, departure time, and value per click.
In tackling truthful reporting of budgets, arrival times, and departure times, it turns out that truthfulness in the classical sense cannot be achieved. We therefore define a new concept called delta-gain.
Delta-gain bounds the utility a player can gain by lying relative to his utility when telling the truth. We argue that for many practical applications, if the delta-gain is small, then players will not invest time and effort in making strategic choices but will truthtell as a default strategy. These concepts capture the essence of dominant-strategy mechanisms, as they lead the advertiser to choose truthtelling over other strategies.
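In symbols, one natural reading of the definition (an illustrative formalization, with \(t_i\) denoting advertiser \(i\)'s truthful report, \(s_i\) any other strategy, and \(u_i\) his utility):

```latex
% delta-gain: lying improves an advertiser's utility by at most delta
\[
  u_i(s_i, s_{-i}) \;\le\; u_i(t_i, s_{-i}) + \delta
  \qquad \text{for every strategy } s_i \neq t_i .
\]
```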
In order to achieve a delta-gain truthful mechanism, the paper also presents a new payment scheme for an online budget-constrained auction. The payment scheme is a generalization of VCG principles to an online scheduling environment with budgeted players.
Using the concept of delta-gain truthfulness, we present the only known budget-constrained sponsored search auction with truthful guarantees on budgets, arrivals, departures, and valuations. Previous work that deals with advertiser budgets handles only the non-strategic case.
Research Areas:
My research focus is mechanism design, or essentially any topic that captures the border between computer science theory, game theory, and microeconomic theory. Among the topics I work on are: Combinatorial Auctions & Markets, Sponsored Search Mechanisms, Social Networks, Coalitions, Information Markets, Rational Cryptography, Rational Distributed Computation, Online Algorithms and Computation, and Approximation Algorithms.
Date: Thursday, March 27th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Shripad Thite
Center for the Mathematics of Information
Caltech
Abstract:
The Frechet distance between two curves in the plane is the minimum length of a leash that allows a dog and its owner to walk along their respective curves, from one end to the other, without backtracking. We propose a natural extension of the Frechet distance to more general metric spaces, which requires the leash itself to move continuously over time. For example, for curves in the punctured plane, the leash cannot pass through or jump over point obstacles ('trees'). Thus, we introduce the homotopic Frechet distance between two curves embedded in a general metric space. We describe a polynomial-time algorithm to compute the homotopic Frechet distance between two given polygonal curves in the plane minus a given set of obstacles, which are either points or polygons.
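For intuition, here is a minimal sketch of the classical discrete Frechet distance between polygonal curves, the obstacle-free notion that the homotopic variant generalizes (an illustrative baseline, not the algorithm of the talk):

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q,
    given as lists of (x, y) points: the shortest leash for walks that
    advance point by point, never backtracking. No obstacles here."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 2), (2, 1)]))  # 2.0
```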
This is joint work with Erin Wolf Chambers, Eric Colin de Verdiere, Jeff Erickson, Sylvain Lazard, and Francis Lazarus, to appear at SoCG'08 and invited to a CGTA special issue.
Date: Tuesday, March 11th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Shamsi T. Iqbal
PhD Candidate
Department of Computer Science
University of Illinois at Urbana-Champaign
Abstract:
Interruptions in the workplace are becoming increasingly prevalent due to the proliferation of proactive behavior within communication applications and collaborative practices. Research has shown that interruptions at inopportune moments often result in substantial costs to users and their tasks, e.g. frustration and reduced productivity. However, information conveyed by notifications is also often beneficial to users. A current thrust within the HCI community has been to develop solutions that reduce the cost of interruption caused by notifications while maintaining their utility.
In this talk, I will present my research on developing and evaluating a new solution to the problem of notification management: mediating notifications to be delivered at breakpoints during user tasks. I will first present empirical results from a study that applies theories of memory and action to understand why breakpoints have lower interruption costs. I will describe new techniques derived from theories of event perception to detect breakpoints with varying interruption costs during interactive tasks. I will then present OASIS, a system for scheduling notification delivery at moments it detects as breakpoints. OASIS allows the effects of notification scheduling to be studied in practical settings and provides a test bed for experimenting with various scheduling policies. Finally, I will present empirical results demonstrating the utility of the system in the context of authentic tasks and discuss its impact on productivity and user affect. This work provides the first empirical evidence that intelligent notification management benefits the end user and contributes new lessons for designing effective notification management systems.
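The scheduling idea can be pictured with a toy deferral policy (hypothetical code, not OASIS itself): notifications are queued and released only at breakpoints whose predicted interruption cost is low enough.

```python
from collections import deque

class BreakpointScheduler:
    """Toy deferral policy in the spirit of OASIS (hypothetical, not the
    actual system). Notifications wait in a queue; the task monitor calls
    on_breakpoint() with the breakpoint's predicted interruption cost,
    and a notification is released if it tolerates that cost."""
    def __init__(self):
        self.pending = deque()

    def notify(self, message, max_cost):
        """Queue a notification deliverable at any breakpoint whose
        cost is <= max_cost (0 = coarse/cheap, 2 = fine/costly)."""
        self.pending.append((message, max_cost))

    def on_breakpoint(self, cost):
        deliver = [m for m, c in self.pending if cost <= c]
        self.pending = deque((m, c) for m, c in self.pending if cost > c)
        return deliver
```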
Bio:
Shamsi T. Iqbal is a Ph.D. candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign, with a focus in the area of Human-Computer Interaction. She received an M.S. in Computer Science from the University of Illinois in 2004 and a B.S. in Computer Science and Engineering from Bangladesh University of Engineering and Technology in 2001. Her dissertation work focuses on developing and evaluating computational systems for intelligently managing notifications on the desktop. Her broader research interests are in attention management, development of models of user activity, physiological measures as evaluation metrics, and educational technology. Shamsi's work on interruption management has been featured in the media, e.g. the New York Times and American Way magazine in 2007. She has served on program committees of several leading conferences in the field of human-computer interaction, including ACM UIST (Poster committee, 2007) and ACM CHI (Notes committee, 2008).
Date: Thursday, March 6th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Thomas Hayes
Research Assistant Professor
TTI Chicago
Abstract:
Consider the following model for sequential decision making in the presence of uncertainty. In each of T rounds, a decision maker must choose an action, which results in some amount of reward (or loss). This process is online: the rewards are initially unknown, but after each action, the decision maker receives the reward for that action as feedback, hopefully enabling better decisions at later times. A key feature of this problem is the need to carefully balance Exploitation (taking what currently seems to be the best action) against Exploration (trying an action just to see what happens).
We will focus on the case when the rewards may be viewed as (unknown) linear functions of the action space. This framework includes the classical "K-armed bandit" problem (what is the best way to play K slot machines, or "one-armed bandits"?), but is much more general. In particular, the action space may be much, much larger than the dimension of the linear space it is embedded in (for the K-armed bandit, the cardinality of the action space equals the dimension). Traditional algorithms for online linear optimization, based on reduction to the K-armed bandit, suffer performance penalties proportional to the number of actions. In many practical settings, this number is exponentially large.
I will present some new algorithms, based on techniques from statistical regression, with provably near-optimal reward guarantees.
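For concreteness, the classical K-armed bandit baseline that the talk generalizes can be solved by UCB1; a minimal sketch (the standard textbook algorithm, not the talk's new regression-based method):

```python
import math, random

def ucb1(pull, K, T):
    """UCB1 for the classical K-armed bandit. pull(a) returns a reward
    in [0, 1]; the index rule balances exploitation (the empirical
    mean) against exploration (the confidence bonus)."""
    counts, sums = [0] * K, [0.0] * K
    for a in range(K):                        # try every arm once
        sums[a] += pull(a); counts[a] += 1
    for t in range(K + 1, T + 1):
        a = max(range(K), key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
        sums[a] += pull(a); counts[a] += 1
    return max(range(K), key=lambda i: sums[i] / counts[i])

# e.g. ucb1(lambda a: float(random.random() < [0.2, 0.5, 0.8][a]), K=3, T=10000)
```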
Date: Tuesday, March 4th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Wenbo He
PhD Candidate
Department of Computer Science
University of Illinois at Urbana-Champaign
Abstract:
Emerging participatory networked embedded systems, designed for fine-grained aggregated information collection, are viewed as a new generation of pervasive computing systems. We expect participatory networks to have both social and economic impact. Their applications are likely to deal with highly sensitive or private information, so one of users' most pressing concerns is the confidentiality of the data collected about them. We therefore need a way to collect aggregated information while at the same time preserving the privacy of individual data.
In this talk, I will present two privacy-preserving data aggregation schemes for additive aggregation functions. The first scheme, Cluster-based Private Data Aggregation (CPDA), leverages a clustering protocol and algebraic properties of polynomials; it has the advantage of incurring less communication overhead. The second scheme, Slice-Mix-AggRegaTe (SMART), builds on slicing techniques and the associative property of addition; it has the advantage of incurring less computation overhead. The goal of this work is to bridge the gap between collaborative data collection and the privacy preservation of individual data. I assess the two schemes by privacy-preservation efficacy, communication overhead, and data aggregation accuracy. Since both schemes trade message overhead for privacy, I will also propose an efficiency-enhancement method for privacy-preserving data aggregation when message overhead is a major concern. Finally, I will conclude the talk by discussing my future plans.
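A minimal sketch of the slicing-and-mixing idea behind an additive scheme like SMART (illustrative only: no real network, no encryption of slice exchange, and not the protocol's actual parameters):

```python
import random

def smart_aggregate(values, num_slices=3):
    """Each node cuts its private value into random slices, keeps one,
    and hands the rest to randomly chosen peers; every node then reports
    only the sum of the slices it holds. Because addition is associative,
    the sum of all reports equals the true aggregate, while no single
    report reveals any individual's value."""
    n = len(values)
    held = [0.0] * n
    for i, v in enumerate(values):
        cuts = sorted(random.uniform(0, v) for _ in range(num_slices - 1))
        bounds = [0.0] + cuts + [v]
        slices = [b - a for a, b in zip(bounds, bounds[1:])]
        held[i] += slices[0]                  # keep one slice
        for s in slices[1:]:                  # mix the rest among peers
            held[random.randrange(n)] += s
    return sum(held)   # equals sum(values); individual values stay hidden
```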
Bio:
Wenbo He is a Ph.D. candidate in the Department of Computer Science at the University of Illinois at Urbana-Champaign, where she is supervised by Professor Klara Nahrstedt. Wenbo's research focuses on pervasive computing and on security and privacy issues in networked embedded systems. Wenbo received the Mavis Memorial Fund Scholarship Award from the College of Engineering at UIUC in 2006 and the C. W. Gear Outstanding Graduate Award from the Department of Computer Science at UIUC in 2007. She is also a recipient of a Vodafone Fellowship from 2005 to 2008 and an NSF TRUST Fellowship in 2007. Wenbo received her M.S. from the Department of Electrical and Computer Engineering at UIUC and her M.Eng. from the Department of Automation at Tsinghua University, Beijing, China, in 2000 and 1998 respectively. She received her B.S. from the Department of Automation at Harbin Engineering University, Harbin, China in 1995. From August 2000 to January 2005, Wenbo was a software engineer at Cisco Systems, Inc.
Date: Thursday, February 28, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Stephen Chong
PhD Candidate
Cornell University
Abstract:
In this talk, I'll present two recent projects that make programming with strong information security more practical: a new way of writing secure web applications, and a framework for expressing and enforcing an application's security requirements.
Swift is a new way to write secure, efficient web applications. Application code is written as Java-like code, annotated with security policies. Using these policies, Swift partitions the application into JavaScript code to run on the client, and Java code to run on the server. Code and data are placed to ensure that the specified policies are obeyed, and also to provide good interactive performance. Security critical code and data are always placed on the server. Swift makes it easier to write secure web applications: the programmer does not need to worry about the secure or efficient placement of code and data.
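To picture the placement principle, here is a toy sketch in Python (hypothetical throughout; not Swift's actual annotation language or partitioning algorithm, and all names are invented for illustration):

```python
# Toy illustration of policy-driven placement: statements labeled
# 'secret' must stay on the server; 'public' ones may be pushed to the
# client for interactive performance.
def place(statements):
    """statements: list of (code, label), label in {'secret', 'public'}."""
    server, client = [], []
    for code, label in statements:
        (server if label == 'secret' else client).append(code)
    return server, client

server, client = place([
    ("check_password(guess)", "secret"),   # security-critical: server
    ("render_keypad()",       "public"),   # UI only: client
])
```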
Declassification occurs when the confidentiality of information is weakened, for example, by allowing more people to read it. Erasure is the opposite: it occurs when confidentiality is strengthened, for example, by allowing fewer people to read it, perhaps removing the information from the system entirely. We have designed a policy framework to express, and provably enforce, applications' declassification and erasure requirements. We have used these policies in the implementation of a secure remote voting service, giving increased assurance that the voting service satisfies its information security requirements.
Bio:
Stephen Chong is a Ph.D. candidate at Cornell University, in Ithaca, NY, where he is advised by Andrew Myers. Steve's research focuses on language-based security and programming languages. He received a bachelor's degree from Victoria University of Wellington, New Zealand, and plans to complete his doctorate by May 2008.
Date: Tuesday, February 26, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Dorian Arnold
Computer Sciences Department
University of Wisconsin-Madison
Abstract:
HPC systems continue to grow in size and complexity, making the development of scalable software systems increasingly difficult. As a result, very few tools and applications run effectively, or at all, at today's largest scales (tens and hundreds of thousands of processors). To make matters worse, million-processor systems are scheduled for availability within the next two to four years.
Tree-based Overlay Networks (TBONs) have proven to be an effective computational model for scalable distributed tools and applications. A TBON is a network of hierarchically organized processes that exploits the logarithmic scaling properties of trees to provide scalable data multicast, gather, and in-network aggregation. In this talk, I will describe the TBON model, demonstrating its power and flexibility with unprecedented scalability results from a variety of application domains. I will also describe our novel TBON failure recovery model, state compensation, which relies on inherent information redundancies amongst TBON processes. State compensation features fast, decentralized tree reconstruction and state recovery protocols that involve only a small subset of the tree and no process coordination. The protocols are scalable because their performance is a function of the tree's fan-out, not its total size. A tree with a fan-out of 64 recovers from failures in milliseconds; with only four levels, such a tree supports over 16,000,000 processes!
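The capacity figure is simple arithmetic: a complete tree with fan-out f and l levels below the root reaches f^l leaf processes. A quick check:

```python
def tbon_leaf_capacity(fan_out, levels):
    """Leaves of a complete tree: fan_out ** levels processes served."""
    return fan_out ** levels

print(tbon_leaf_capacity(64, 4))   # 16777216, i.e. over 16 million processes
```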
Bio:
Dorian Arnold is a doctoral candidate and Intel Foundation Ph.D. fellow in the Computer Sciences Department at the University of Wisconsin. He holds an M.S. degree in Computer Science from the University of Tennessee and a B.S. degree in Mathematics and Computer Science from Regis University (Denver, CO). From 1999 to 2001, Dorian served as technical lead of the NetSolve project at the University of Tennessee's Innovative Computing Laboratory. In 2006, Dorian was a technical scholar at Lawrence Livermore National Laboratory. His research focuses on the performance and scalability issues of large distributed systems, including efficient communication and runtime data analysis, fault-tolerance, and system deployment.
Date: Thursday, February 21, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Gary Wassermann
Ph.D. candidate
UC Davis
Abstract:
Web applications enable much of today's online business, including banking, shopping, university admissions, and various governmental activities. Anyone with a web browser can access them, and the data they manage typically has significant value both to the users and to the service providers. Cross-site scripting (XSS) and SQL injection are classes of attacks in which an attacker interacts with a client or database, respectively, through vulnerabilities in the server, thereby gaining the trust level of the server. These classes of attacks are pervasive: since 2005, they have been the most frequently reported classes of vulnerabilities. The vulnerabilities arise because web applications' layers (client, server, and database) communicate via unstructured strings, and validating untrusted input for use in the resulting commands is error-prone and poses a challenging software engineering problem.
In this talk, I will present a general characterization of these classes of input-validation errors and a set of dynamic and static techniques to detect and prevent XSS and SQL injection attacks. Programmers usually do not specify their intentions explicitly regarding SQL query construction, but I will show how we can use principled techniques to characterize programmer intentions. We can then prevent attack queries from being sent to the database with a low-overhead runtime check that precisely distinguishes legitimate queries from attacks. In order to help find bugs early in the software development process, I also pursued static analysis, and I will describe a sound and precise analysis that scales to large, real-world web applications and has found both known and unknown SQL injection vulnerabilities. I will further present how we extended this static analysis to the related but more difficult problem of XSS. I will conclude the talk by discussing future challenges in this domain.
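As a standard illustration of the vulnerability class (textbook material, not the speaker's technique), compare string-concatenated and parameterized query construction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

name = "alice' OR '1'='1"   # attacker-controlled input

# Vulnerable: the untrusted string alters the *structure* of the query,
# turning a lookup into a tautology that matches every row.
unsafe = "SELECT * FROM users WHERE name = '%s'" % name

# Safe: a parameterized query confines the input to a leaf of the parse
# tree, so it can never change the query's syntactic structure.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```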
Bio:
Gary Wassermann is a Ph.D. candidate in Computer Science at UC Davis, where he specializes in software engineering and programming languages. His current research focuses on software reliability and security. He received his B.S. in Computer Science, also from UC Davis. Gary is a recipient of a GAANN fellowship.
Date: Tuesday, February 19, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Soumya Ray
Postdoctoral Researcher
Oregon State University
Abstract:
Humans are remarkably good at using knowledge acquired while solving past problems to efficiently solve novel, related problems. How can we build artificial agents with similar capabilities? In this talk, I focus on "reinforcement learning" (RL)—a setting where an agent must make a sequence of decisions to reach a goal, with intermittent feedback from the environment about the cost of its current decision. I describe an approach that allows agents to leverage experience gained from solving prior RL tasks. To do this, the agent learns a hierarchical Bayesian model from previously solved RL tasks and uses it to quickly infer the characteristics of a novel RL task. I present empirical evidence from navigation problems and from tactical battle scenarios in a real-time strategy game, Wargus, showing that leveraging experience from prior tasks improves the rate of convergence to a solution in a new task.
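For readers unfamiliar with the setting, here is a minimal tabular Q-learning loop illustrating the basic RL cycle of acting, observing reward, and updating (a generic sketch, not the talk's hierarchical Bayesian transfer method; the `env` interface is an assumption):

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning: act, observe reward, update. Assumed env
    interface: reset() -> state; step(state, action) ->
    (next_state, reward, done); env.actions is a list of actions."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda act: Q[s, act]))
            s2, r, done = env.step(s, a)
            target = 0.0 if done else max(Q[s2, act] for act in env.actions)
            Q[s, a] += alpha * (r + gamma * target - Q[s, a])
            s = s2
    return Q
```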
Bio:
Soumya Ray obtained his baccalaureate degree from the Indian Institute of Technology, Kharagpur, and his doctorate from the University of Wisconsin, Madison in 2005. Since 2006, he has been a postdoctoral researcher in the machine learning group at Oregon State University. His research interests are in statistical machine learning, reinforcement learning and planning, and bioinformatics.
Date: Thursday, February 14, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Betty Mohler
Post-doctoral Researcher
Max Planck Institute for Biological Cybernetics
Abstract:
Virtual environments (VEs) are computer simulations of real or fictional environments with which users can interact. Potential applications of VEs include training, visualization, entertainment, design, rehabilitation, education, and research. The ultimate goal of VEs is to provide the full sensory experience of being in a simulated world. VEs are a very powerful tool for answering scientific research questions from many disciplines. In this talk, four specific projects will be described that use virtual environments to investigate human behavior and also offer suggestions for improving the current technology used for VEs. First, an empirical study of space perception within immersive VEs will be presented. Second, results demonstrating the visual influence on human locomotor behavior will be discussed. Third, a project that analyzes the illusion of self-motion within a large-screen projection VE will be presented. Finally, the implementation of fully articulated full-body avatars within a VE in real time will be described. My research vision is to use virtual environments as a multi-disciplinary tool to provide scientists from various research backgrounds with a rich collection of data on human behavior and interaction. My goal as a scientist has always been to investigate human perception while simultaneously gaining insight from the user in order to improve virtual environment hardware and applications.
Bio:
Betty Mohler received her PhD in computer science from the University of Utah in 2007. She is now in her second year of a post-doctoral research position at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. Her main research interest is in understanding the human observer towards the aim of improving virtual environment applications. Her approach has always been a multi-disciplinary one and, therefore, she currently collaborates with engineers, neuroscientists and psychologists.
Date: Tuesday, February 12, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Sunghun Kim
Postdoctoral Associate
MIT
Abstract:
Almost all software contains undiscovered bugs: ones that have not yet been exposed by testing or by users. Where are these bugs located? This talk presents two approaches for predicting the location of bugs by analyzing software history. First, the bug cache holds 10% of the files in a software project. Through an analysis of the software's development history and the location of past bugs, files are added to and removed from the cache based on four bug localities: temporal, spatial, changed-entity, and new-entity locality. After processing, the files in the bug cache contain 73-95% of undiscovered bugs. Second, to further improve the localization of predicted bugs, automatic change classification uses information from configuration management commit transactions. Using machine learning techniques (Bayes Net, Support Vector Machines), we classify commits as likely or unlikely to contain a fault. The best precision and recall figures for each project are typically in the mid-70s. Hence, it is possible for a configuration management system to inform a developer, post-commit, that they have just created a bug (with approximately 94% likelihood).
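The caching idea can be sketched as follows (a toy LRU-style version under the stated localities, not the exact policy from the talk):

```python
from collections import OrderedDict

class BugCache:
    """Toy bug cache: keep a fixed fraction of a project's files. When a
    fix touches a file, (re)insert it and its co-changed neighbors
    (temporal / spatial / changed-entity locality), evicting the least
    recently implicated files. Details differ from the talk's policy."""
    def __init__(self, all_files, fraction=0.10):
        self.capacity = max(1, int(fraction * len(all_files)))
        self.cache = OrderedDict()

    def _touch(self, f):
        self.cache.pop(f, None)
        self.cache[f] = True
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recent

    def on_bug_fix(self, fixed_file, co_changed=()):
        self._touch(fixed_file)
        for f in co_changed:
            self._touch(f)

    def predict(self):
        return list(self.cache)                 # likely bug locations
```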
Bio:
Sunghun Kim is a postdoctoral associate at MIT and a member of the Program Analysis Group. He completed his Ph.D. in the Computer Science Department at the University of California, Santa Cruz in 2006. He was Chief Technical Officer (CTO) and led a 25-person team at Nara Vision Co. Ltd., a leading Internet software company in Korea, for six years. His core research area is Software Engineering, focusing on software evolution, program analysis, and empirical studies.
Date: Tuesday, January 29, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Lee Ward
Sandia National Laboratories
Abstract:
The dominant paradigm for I/O interfaces and implementations in the supercomputing world is based on the POSIX standards. While this approach is well known to be sub-optimal, many potentially acceptable mitigation strategies and alternative approaches have yet to be investigated. This talk will give a quick overview of a few current, popular, and representative system I/O architectures, application I/O strategies, and I/O middleware. We will then examine and discuss a consensus-based list of open problems in the field, one used by various U.S. government agencies, such as DOE and NSF, to motivate research proposals.
Bio:
Lee Ward is a principal member of technical staff at Sandia National Laboratories. As an inveterate student of operating systems and file systems, his interests have provided the opportunity to make contributions in high performance, parallel file systems, IO libraries, hierarchical storage management, and compute cluster integration/management systems.
Date: Thursday, January 24th, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Bill Smart
Department of Computer Science and Engineering
Washington University in St. Louis
Abstract:
Learning to control high-dimensional, non-linear dynamical systems is hard, in part because of the Curse of Dimensionality. The volume of the state space increases exponentially with the number of state variables used to describe the system. Learning a controller over this space often requires an exponential amount of training data, limiting us to relatively low-dimensional systems.
Many robotic systems, however, do not inhabit the entire volume of their state space. In fact, any system with a periodic gait lives on a 1-dimensional manifold embedded in the full state space of the system. In this talk, we introduce Shaped Manifold Control (SMC), which simultaneously estimates the manifold on which the system operates and learns an effective controller over that manifold. SMC sidesteps the curse of dimensionality because it learns over a 1-dimensional manifold, regardless of the dimensionality of the full state space. We have successfully applied SMC to a number of simulated high-dimensional continuous dynamical systems, including swimming and walking robots, and will demonstrate the resulting learned controllers.
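To illustrate why learning over the manifold sidesteps the dimensionality problem, here is a toy sketch (a hypothetical representation, not SMC itself: the manifold as an ordered array of prototype states, the controller as a per-phase lookup table):

```python
import numpy as np

def nearest_phase(state, manifold_pts):
    """Project a high-dimensional state onto a learned 1-D cyclic
    manifold, represented here as an ordered (N, D) array of prototype
    states; returns the phase index of the closest prototype."""
    return int(np.argmin(np.linalg.norm(manifold_pts - state, axis=1)))

def act(state, manifold_pts, policy_table):
    """The controller lookup is 1-dimensional (the phase), no matter how
    large D is -- this is the dimensionality saving the talk describes."""
    return policy_table[nearest_phase(state, manifold_pts)]
```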
Bio:
Bill Smart holds a B.Sc. (Hons) in computer science from the University of Dundee (1991), an M.Sc. in Intelligent robotics from the University of Edinburgh (1992), and both an M.S. (1996) and Ph.D. (2002) in computer science from Brown University. He is currently an assistant professor of computer science and engineering at Washington University in St. Louis. He co-directs the Media and Machines Laboratory, a multi-disciplinary laboratory performing research in the areas of robotics, computer vision, machine learning, and computer graphics. His current research focuses on learning robust controllers for high-dimensional robot systems, human-robot interaction, and direct brain-computer interfaces.
Date: Tuesday, January 22, 2008
Time: 11 am — 12:15 pm
Place: ME 218
Maggie Werner-Washburne
Biology Department
University of New Mexico
Abstract:
Modern Biology is being revolutionized by genomic approaches, enabled by knowing the entire DNA sequence of an organism. I will talk about types of data, CS approaches, and how we have collaborated with computer scientists to understand more about quiescent yeast cells.
Bio:
Maggie Werner-Washburne received a bachelor's degree in English from Stanford in 1971, a master's degree in Botany from the University of Hawaii in 1979, and a Ph.D. in Botany with a minor in Biochemistry from the University of Wisconsin in 1984. Her post-doctoral work was in yeast molecular genetics at the University of Wisconsin, where she discovered that heat shock proteins work as chaperones. She is currently a Professor in the Department of Biology at UNM. Dr. Werner-Washburne has been funded continuously by NSF and NIH since coming to UNM and is currently PI of several grants, including the NIH-funded IMSD program, which supports undergraduate and graduate research around campus. She has received a Presidential Young Investigator Award and a Presidential Award for Math, Engineering, and Science Mentoring, and is an AAAS Fellow. Dr. Werner-Washburne's work has been cited more than 3500 times in the literature. The focus of her research is genomic analysis of quiescence in yeast and the development of new CS and statistical approaches for the evaluation, mining, and integration of different data types in genomics and proteomics.