Colloquia
Google Calendar of Colloquia
Future Colloquia (Tentative Schedule)
Info on Colloquia Requirements for Students
For students taking the colloquia course, here is some information on course requirements.
Date: Thursday, April 4, 2013
Time: 11:00 am — 12:30 pm
Place: Mechanical Engineering 218
Subramani Mani
Assistant Professor
Department of Biomedical Informatics
Vanderbilt University
In this talk we will introduce causal Bayesian networks (CBN) and provide a
working definition of causality. After a short survey of methods for learning
CBNs from data we will discuss two causal discovery algorithms: the Bayesian
local causal discovery algorithm (BLCD) and the post-processing Y-structure
algorithm (PPYA). We will present results from five simulated data sets and one
real-world population-based data set in the medical domain. We will conclude
with some potential applications in biomedicine and research directions for the
future.
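For readers unfamiliar with the formalism, the sketch below shows a toy causal Bayesian network and how an intervention (the do-operator) differs from passive observation. The variables, structure, and probabilities are invented for illustration; this is not the BLCD or PPYA algorithm from the talk.

```python
import random

random.seed(0)

# Toy causal Bayesian network over binary variables: Smoking -> Tar -> Cancer.
# Structure and probabilities are invented for illustration; they are not from
# the talk. Each table gives P(child = 1 | parent value).
P_SMOKING = 0.3
P_TAR = {0: 0.05, 1: 0.90}     # P(Tar = 1 | Smoking)
P_CANCER = {0: 0.01, 1: 0.20}  # P(Cancer = 1 | Tar)

def sample(do_smoking=None):
    """Draw one joint sample; passing do_smoking implements the do() intervention,
    which sets Smoking directly instead of sampling it from its usual distribution."""
    s = do_smoking if do_smoking is not None else int(random.random() < P_SMOKING)
    t = int(random.random() < P_TAR[s])
    c = int(random.random() < P_CANCER[t])
    return s, t, c

def p_cancer(do_smoking=None, n=200_000):
    return sum(sample(do_smoking)[2] for _ in range(n)) / n

# Interventional distributions P(Cancer | do(Smoking = 1)) vs. do(Smoking = 0):
# their difference is the causal effect encoded by the network.
print(p_cancer(do_smoking=1), p_cancer(do_smoking=0))
```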
Bio:
Subramani Mani
trained as a physician and completed his residency training in internal
medicine (1990) and a research fellowship in Cardiology at the Medical
College of Trivandrum, India. He then obtained a Master's degree in Computer
Science from the University of South Carolina, Columbia in 1994 and worked as a
post-graduate researcher in the Department of Information and Computer Science
at the University of California, Irvine. He completed his Ph.D. in Intelligent
Systems with a Biomedical Informatics track at the University of Pittsburgh
in 2005. He joined the Department of Biomedical Informatics at Vanderbilt
University as an Assistant Professor in 2006 and was Director of the Discovery
Systems Lab there before moving to the Translational Informatics Division in the
Department of Internal Medicine as an Associate Professor in the fall of 2012.
His research interests are data mining, machine learning, predictive modeling
and knowledge discovery with a focus on discovering cause and effect
relationships from observational data.
Date: Tuesday, April 2, 2013
Time: 11:00 am — 12:30 pm
Place: Mechanical Engineering 218
Cristopher Moore
Santa Fe Institute
More network data is becoming available than humans can analyze by hand or
eye. At the same time, much of this data is partial or noisy: nodes have
attributes like demographics, location, and content that are partly known and
partly hidden, many links are missing, and so on. How can we discover the
important structures in a network, and use these structures to make good guesses
about missing information? I will present a Bayesian approach based on
generative models, powered by techniques from machine learning and statistical
physics, with examples from food webs, word networks, and networks of documents.
Along the way, we will think about what "structure" is anyway, and I will end
with a cautionary note about how far we can expect to get when we think of
"networks" in a purely topological way.
Bio:
Cristopher Moore
received his B.A. in Physics, Mathematics, and Integrated Science from
Northwestern University, and his Ph.D. in Physics from Cornell. He has
published over 100 papers at the boundary between physics and computer science,
ranging from quantum computing, to phase transitions in NP-complete problems,
to the theory of social networks and efficient algorithms for analyzing their
structure. With Stephan Mertens, he is the author of The Nature of
Computation, published by Oxford University Press. He is a Professor at the
Santa Fe Institute.
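As a concrete illustration of the generative network models mentioned in the abstract above, the sketch below samples a graph from a stochastic block model. This is an illustrative stand-in with made-up group sizes and link probabilities, not necessarily one of the specific models in the talk.

```python
import random
from itertools import combinations

random.seed(1)

# A minimal stochastic block model: one common generative model for networks
# with community structure. All parameters here are invented for illustration.
n_nodes, n_groups = 30, 3
p_in, p_out = 0.4, 0.05  # link probabilities within / between groups

groups = [random.randrange(n_groups) for _ in range(n_nodes)]
edges = []
for u, v in combinations(range(n_nodes), 2):
    p = p_in if groups[u] == groups[v] else p_out
    if random.random() < p:
        edges.append((u, v))

# Inference runs this model "backwards": given the observed edges (possibly with
# some missing), infer the hidden group labels and the p_in/p_out parameters,
# then use them to predict missing links or hidden node attributes.
print(len(edges), "edges among", n_nodes, "nodes")
```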
Date: Thursday, March 28, 2013
Time: 11:00 am — 12:30 pm
Place: Centennial Engineering Room 1041
Nancy Amato
Department of Computer Science and
Engineering
Texas A&M University
ACM Distinguished Lecturer
Motion planning arises in many application domains such as computer
animation (digital actors), mixed reality systems and intelligent
CAD (virtual prototyping and training), and even computational biology
and chemistry (protein folding and drug design). Surprisingly,
one type of sampling-based planner, the probabilistic roadmap
method (PRM), has proven effective on problems from all these domains.
In this talk, we describe the PRM framework and give an overview of
some PRM variants developed in our group. We describe in more detail
our work related to virtual prototyping, crowd simulation, and protein
folding. For virtual prototyping, we show that in some cases a hybrid
system incorporating both an automatic planner and haptic user input
leads to superior results. For crowd simulation, we describe PRM-based
techniques for pursuit evasion, evacuation planning and architectural
design. Finally, we describe our application of PRMs to simulate
molecular motions, such as protein and RNA folding. More information
regarding our work, including movies, can be found at
http://parasol.tamu.edu/~amato/
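To make the PRM framework concrete, here is a bare-bones sketch of roadmap construction and querying for a point robot in a 2D workspace. The obstacle layout, sample count, and connection radius are placeholder choices; real PRM variants differ mainly in their sampling and connection strategies.

```python
import math, random

random.seed(2)

# Bare-bones probabilistic roadmap (PRM) in the unit square with one circular
# obstacle. All numbers are placeholders chosen for illustration.
OBSTACLES = [((0.5, 0.5), 0.2)]          # (center, radius)
N_SAMPLES, RADIUS = 200, 0.15

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def segment_free(p, q, steps=20):
    # Check the straight-line local path by sampling intermediate points.
    return all(collision_free((p[0] + t * (q[0] - p[0]),
                               p[1] + t * (q[1] - p[1])))
               for t in (i / steps for i in range(steps + 1)))

# Roadmap construction: sample collision-free configurations, then connect
# nearby pairs whose connecting segment is also collision-free.
nodes = []
while len(nodes) < N_SAMPLES:
    p = (random.random(), random.random())
    if collision_free(p):
        nodes.append(p)
edges = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        if math.dist(nodes[i], nodes[j]) < RADIUS and segment_free(nodes[i], nodes[j]):
            edges[i].append(j)
            edges[j].append(i)

# A query connects start and goal into the roadmap and searches the graph
# (here, a simple graph search just to test connectivity).
def connected(i, j):
    seen, frontier = {i}, [i]
    while frontier:
        k = frontier.pop()
        if k == j:
            return True
        for m in edges[k]:
            if m not in seen:
                seen.add(m)
                frontier.append(m)
    return False

print("roadmap nodes:", len(nodes), "query connected:", connected(0, 1))
```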
Bio:
Nancy Amato
is Unocal Professor in the Department of Computer Science
and Engineering at Texas A&M University where she co-directs the Parasol Lab
and is a Deputy Director of the Institute for Applied Math and Computational
Science (IAMCS). She received undergraduate degrees in Mathematical
Sciences and Economics from Stanford University, and M.S. and Ph.D.
degrees in Computer Science from UC Berkeley and the University of Illinois
at Urbana-Champaign. She was an AT&T Bell Laboratories PhD Scholar, she
is a recipient of a CAREER Award from the National Science Foundation,
is a Distinguished Speaker for the ACM Distinguished Speakers Program,
was a Distinguished Lecturer for the IEEE Robotics and Automation Society,
and is an IEEE Fellow. She was co-Chair of the NCWIT Academic Alliance
(2009-2011), is a member of the Computing Research Association's Committee
on the Status of Women in Computing Research (CRA-W) and of the ACM,
IEEE, and CRA sponsored Coalition to Diversify Computing (CDC). Her main
areas of research focus are motion planning and robotics, computational
biology and geometry, and parallel and distributed computing. Current
representative projects include the development of a technique for
modeling molecular motions (e.g., protein folding), investigation of
new strategies for crowd control and simulation, and STAPL, a parallel
C++ library enabling the development of efficient, portable parallel
programs.
Date: Tuesday, March 19, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Diane Oyen
UNM Department of Computer Science
PhD Student
Machine learning algorithms for identifying dependency networks are being
applied to data in biology to learn protein correlations and in neuroscience to
learn brain pathways associated with development, adaptation, and disease. Yet
rarely is there sufficient data to infer robust individual networks at each
stage of development or for each disease/control population. Therefore, these
multiple networks must be considered simultaneously, dramatically expanding the
space of solutions for the learning problem. Standard machine learning
objectives find parsimonious solutions that best fit the data; yet with limited
data, there are numerous solutions that are nearly score-equivalent. Effectively
exploring these complex solution spaces requires input from the domain scientist
to refine the objective function.
In this talk, I present transfer learning algorithms for both Bayesian networks
and graphical lasso that reduce the variance of solutions. By incorporating
human input in the transfer bias objective, the topology of the solution space
is shaped to help answer knowledge-based queries about the confidence of
dependency relationships that are associated with each population. I also
describe an interactive human-in-the-loop approach that allows a human to react
to machine-learned solutions and give feedback to adjust the objective function.
The result is a solution to an objective function that is jointly defined by the
machine and a human. Case studies are presented in two areas: functional brain
networks associated with learning stages and with mental illness; and plasma
protein concentration dependencies associated with cancer.
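As a rough illustration of the kind of dependency-network estimation involved, the sketch below fits a graphical lasso independently to each of two synthetic populations. The transfer-learning objectives described in the talk additionally couple the per-population estimates through a bias term, which is not shown here; all data, sizes, and parameters are invented.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)

# Illustration only: independent sparse dependency-network estimates for two
# related populations. The talk's transfer objectives add a coupling term so a
# data-poor population can borrow strength from a data-rich one.
def synthetic_population(n_samples, n_vars=6):
    # Simple chain dependence so the true precision matrix is sparse.
    x = rng.standard_normal((n_samples, n_vars))
    for j in range(1, n_vars):
        x[:, j] += 0.8 * x[:, j - 1]
    return x

for name, n in [("population A", 500), ("population B", 60)]:
    model = GraphicalLasso(alpha=0.1).fit(synthetic_population(n))
    support = (np.abs(model.precision_) > 1e-3).astype(int)
    n_edges = (support.sum() - support.trace()) // 2  # nonzero off-diagonal pairs
    print(name, "recovered edges:", n_edges)
```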
Bio:
Diane Oyen
received her BS in electrical and computer engineering from Carnegie Mellon
University. She then worked for several years designing ethernet controller
chips and teaching math before returning to academia. Currently, she is a PhD
Candidate advised by Terran Lane in computer science at the University of New
Mexico. Her broad research interests are in developing machine learning
algorithms to aid the discovery of scientific knowledge. She has focused on
using transfer learning in structure identification of probabilistic graphical
models learned from data with interaction from a human expert. She has been
invited to present her research at LANL and currently serves on the senior
program committee of AAAI.
Date: Thursday, March 7, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Tim Weninger
PhD Candidate
University of Illinois Urbana-Champaign
Graphs are all around us. They can be made to model countless real-world
phenomena, ranging from the social to the scientific, including engineering,
biology, chemistry, medical systems, and e-commerce systems. We call these
graphs information networks because they represent bits of information and their
relationships. This talk focuses on discovering roles and types in very large
scale information networks by exploring hierarchies inherent within the
networks. We focus on the Web information network, as well as specialized
sub-networks like Wikipedia, where we aim to determine the type of a Web page or
Wiki page as well as its position in the type hierarchy (e.g., professor,
student, and course exist within a department within a college) and the
relationships among these types. This new information can then be used to answer
expressive queries on the network and allows us to explore additional properties
of the network that were previously unknown.
Bio:
Tim Weninger
is graduating from the Department of Computer Science at the University of
Illinois Urbana-Champaign where he is a member of the DAIS group and the Data
Mining Lab. His research interests are in large scale information network
analysis, especially on the Web, as well as "big data"-bases, "big
data"-mining, information retrieval and social media. Tim is a recipient of the
National Defense Science and Engineering Graduate Fellowship (NDSEG) and the
National Science Foundation Graduate Research Fellowship (NSF GRFP). He has
been an invited speaker at many international venues and has served as a
reviewer, external reviewer or PC member for dozens of international journals,
conferences and workshops.
Date: Tuesday, March 5, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Todd Hester
Post-doctoral researcher and
research educator
Department of Computer Science
University of Texas at Austin
Robots have the potential to solve many problems in society, because of their
ability to work in dangerous places doing necessary jobs that no one wants or is
able to do. One barrier to their widespread deployment is that they are mainly
limited to tasks where it is possible to hand-program behaviors for every
situation that may be encountered. For robots to meet their potential, they need
methods that enable them to learn and adapt to novel situations that they were
not programmed for. Reinforcement learning (RL) is a paradigm for learning
sequential decision-making processes and could solve the problems of learning
and adaptation on robots.
While there has been considerable research on RL, there has been relatively
little research on applying it to practical problems such as controlling robots.
In particular, for an RL algorithm to be applicable to such problems, it must
address the following four challenges: 1) learn in very few actions; 2) learn in
domains with continuous state features; 3) handle sensor and/or actuator delays;
and 4) continually select actions in real time. In this talk, I will present the
TEXPLORE algorithm, which is the first algorithm to address all four of these
challenges. I will present results showing the ability of the algorithm to learn
to drive an autonomous vehicle at various speeds. In addition, I will present my
vision for developing more useful robots through the use of machine learning.
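The sketch below is not the TEXPLORE algorithm; it is a much simpler tabular Dyna-Q loop, included only to illustrate the general model-based idea behind challenge 1: learn a model of the environment from real experience, then plan with that model between real actions so that few real actions are needed. The chain environment, rewards, and hyperparameters are invented.

```python
import random

random.seed(4)

# Tiny tabular Dyna-Q on an invented 6-state chain with reward at the right end.
N_STATES, ACTIONS = 6, (0, 1)            # action 0 moves left, 1 moves right
GAMMA, ALPHA, PLANNING_STEPS, EPS = 0.95, 0.5, 30, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                               # (state, action) -> (reward, next state)

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2   # reward only at the right end

def choose(s):
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

s = 0
for _ in range(200):                     # real interactions with the environment
    a = choose(s)
    r, s2 = step(s, a)
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    model[(s, a)] = (r, s2)              # learn a (here deterministic) model
    for _ in range(PLANNING_STEPS):      # planning: replay transitions from the model
        ps, pa = random.choice(list(model))
        pr, ps2 = model[(ps, pa)]
        Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
    s = 0 if s2 == N_STATES - 1 else s2  # reset to the start after reaching the goal

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```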
Bio:
Todd Hester
is a post-doctoral researcher and research educator in the Department of
Computer Science at the University of Texas at Austin. He completed his Ph.D.
at UT Austin in December 2012 under the supervision of Professor Peter Stone.
His research is focused on developing new reinforcement learning methods that
enable robots to learn and improve their performance while performing tasks.
Todd instructs an undergraduate course that introduces freshmen to research on
autonomous intelligent robots. He has been one of the leaders of UT
Austin's RoboCup team, UT Austin Villa, which won the international robot
soccer championship from a field of 25 teams in 2012.
Date: Thursday, February 28, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Kevin Small
Tufts University
Research Scientist
Machine learning and data mining methods have emerged as cornerstone
technologies for transforming the deluge of data generated by modern society
into actionable intelligence. For applications ranging from business
intelligence to public policy to clinical guidelines, the overarching goal of
"big data" analytics is to identify, analyze, and summarize the
available evidence to support decision makers. While ubiquitous computing has
greatly simplified data collection, successful deployment of machine learning
techniques is also generally predicated on obtaining sufficient quantities of
human-supplied annotations. Accordingly, judicious use of human effort in these
settings is crucial to building high-performance systems in a cost-effective
manner.
In this talk, I will describe methods for reducing annotation costs and
improving system performance via interactive learning protocols. Specifically,
I will present models capable of exploiting domain-expert knowledge through the
use of labeled features -- both within the active learning framework to
explicitly reduce the need for labeled data during training and the more general
setting of improving classifier performance in high-expertise domains.
Furthermore, I will contextualize this work within the scientific systematic
review process, highlighting the importance of interactive learning protocols in
a particular scenario where information must be reliably extracted from multiple
information sources, synthesized into a cohesive report, and updated as new
evidence is made available in the scientific literature. I will demonstrate
that we can partially automate many of the aspects of this important task, thus
reducing the costs incurred when interacting with highly-trained experts.
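As a minimal illustration of interactive learning under a limited labeling budget, the sketch below runs pool-based active learning with uncertainty sampling on synthetic data. The methods in the talk go further by exploiting labeled features supplied by domain experts, which is not shown here; the data and budget are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Pool-based active learning with uncertainty sampling: spend annotation effort
# where the current model is least certain.
X = rng.standard_normal((500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # stand-in for expert labels

# Seed the labeled set with a few examples of each class; the rest form the pool.
labeled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]  # most uncertain pool item
    labeled.append(query)                              # "ask the expert" for its label
    pool.remove(query)

print("labels used:", len(labeled), "accuracy on all data:", clf.score(X, y))
```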
Bio:
Kevin Small
received his Ph.D. degree in computer science from the University of Illinois
at Urbana-Champaign (Cognitive Computation Group) in 2009. From 2009 to 2012,
he held positions as a postdoctoral researcher at Tufts University (Machine
Learning Group) and as a research scientist at Tufts Medical Center (Center for
Evidence-based Medicine). He is presently conducting research within the
Division of Program Coordination, Planning, and Strategic Initiatives at the
National Institutes of Health. Kevin's primary research interests are in
the areas of machine learning, data mining, natural language processing, and
artificial intelligence. Specifically, his research results concern using
interactive learning protocols to improve the performance of machine learning
algorithms while reducing sample complexity.
Date: Tuesday, February 26, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Trilce Estrada
University of Delaware
Post-doctoral Researcher
Nowadays, emerging distributed technologies enable the scientific community to
perform large-scale simulations at a rate never seen before. The pressure those
systems put on scientists is twofold. First, they need to analyze the massive
amount of data generated as a consequence of those computations. Second,
scientists need to make sure they reach meaningful scientific conclusions with
the available resources, oftentimes by changing the course of an experiment at
run-time. The first challenge implies the need for new and more efficient
clustering and classification techniques that require at most linear time with
respect to the amount of data generated. The second challenge requires
algorithms able to build knowledge from the data and make decisions on the fly
in a time-sensitive scenario.
In this talk I will present scalable algorithms that address both challenges;
the first one in the context of a high-throughput protein-ligand docking
application, and the second in the context of a Volunteer Computing system. I
will conclude the talk with future directions of my research including an
application for cancer detection that uses crowdsourcing to build its knowledge
incrementally.
Bio:
Trilce Estrada
is currently a post-doctoral researcher in the Computer and Information Science
Department at the University of Delaware, where she earned her PhD in 2012. Her
research includes real-time decision-making for high-throughput multi-scale
applications, scalable analysis of very large molecular datasets for drug
design, and emulation of heterogeneous distributed systems for performance
optimization. Trilce earned her MS in Computer Science and BS in Informatics
from INAOE and Universidad de Guadalajara, Mexico, respectively. She is an
active advocate of women in computing and current mentor of CISters@UD, a
student initiative that promotes the participation of women in
technology-related fields at her university.
Date: Thursday, February 21, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Bonnie Kirkpatrick
University of British Columbia
Post-doctoral Researcher
Probabilistic models are common in biology. Many of the successful
models have been readily tractable, leaving calculations on models with
a combinatorial-sized state space as an open problem. This talk
examines two kinds of models with combinatorial state spaces:
continuous-time and discrete-time Markov chains. These models are
applied to two problems: RNA folding pathways and family genetics.
While the applications are disparate topics in biology, they are
related via their models, the statistical quantities of interest, and
in some cases the computational techniques used to calculate those
quantities.
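As a tiny illustration of the kind of quantity such models provide, the sketch below computes finite-time transition probabilities for an invented three-state continuous-time Markov chain. The applications in the talk involve combinatorially many states, which is exactly what makes the corresponding calculations hard.

```python
import numpy as np
from scipy.linalg import expm

# A three-state continuous-time Markov chain with invented rates.
# Rate matrix Q: off-diagonal entries are transition rates; rows sum to zero.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  0.2, -0.2]])

t = 2.0
P = expm(Q * t)        # P[i, j] = probability of being in state j at time t,
print(P)               # having started in state i
print(P.sum(axis=1))   # sanity check: each row sums to 1
```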
Bio:
Bonnie Kirkpatrick
is from Montana, a state where the population
density is one person per square mile. She attended Montana State
University for her undergraduate degree in computer science, before
moving to California. Once there, she completed her doctoral
dissertation on "Algorithms for Human Genetics" under the supervision
of Richard M. Karp and received her Ph.D. in computer science. Now she
is at the University of British Columbia doing post-doctoral work with
Anne Condon in the Department of Computer Science.
Date: Tuesday, February 19, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Srikanth V. Krishnamurthy
Professor of Computer Science
University of California, Riverside
There has been an explosion both in smartphone sales and usage on one
hand, and social network adoption on the other. Our work targets several
directions in (a) exploiting smartphone resources in an appropriate
way for computing, information dissemination and sharing, and storage, and
(b) making social networks more usable by providing fine-grained privacy
controls. In this talk, I present our recent work on mobile computing and
privacy in online social networks. Specifically, I will describe (a) how
one can go about building a distributed computing infrastructure using
smartphones and (b) how one can provision fine-grained privacy controls
with Twitter. Below, I provide a brief synopsis of the two projects.
Smartphone Cloud: Every night, a large number of idle smartphones are
plugged into a power source for recharging the battery. Given the
increasing computing capabilities of smartphones, these idle phones
constitute a sizeable computing infrastructure. Therefore, for a large
enterprise which supplies its employees with smartphones, we argue that a
computing infrastructure that leverages idle smartphones being charged
overnight is an energy-efficient and cost-effective alternative to running
tasks on traditional server infrastructure. Building a cloud with
smartphones presents a unique set of challenges that stem from
heterogeneities in CPU clock speed, variability in network bandwidth, and
low availability compared to servers. We address many of these challenges
to build CWC -- a distributed computing infrastructure using smartphones.
Twitsper: User privacy has been a growing concern in online
social networks (OSNs). While most OSNs today provide some form of privacy
controls so that their users can protect their shared content from other
users, these controls are typically not sufficiently expressive and/or do
not provide fine-grained protection of information. We consider the
introduction of a new privacy control---group messaging on Twitter, with
users having fine-grained control over who can see their messages.
Specifically, we demonstrate that such a privacy control can be offered to
users of Twitter today without having to wait for Twitter to make
changes to its system. We do so by designing and implementing Twitsper, a
wrapper around Twitter that enables private group communication among
existing Twitter users while preserving Twitter's commercial interests.
Our design preserves the privacy of group information (i.e., who
communicates with whom) both from the Twitsper server as well as from
undesired users. Furthermore, our evaluation shows that our implementation
of Twitsper imposes minimal server-side bandwidth requirements and incurs
low client-side energy consumption.
Bio:
Srikanth V. Krishnamurthy
received his Ph.D. degree in electrical and
computer engineering from the University of California at San Diego in
1997. From 1998 to 2000, he was a Research Staff Scientist at the
Information Sciences Laboratory, HRL Laboratories, LLC, Malibu, CA.
Currently, he is a Professor of Computer Science at the University of
California, Riverside. His research interests are in wireless networks,
online social networks and network security. Dr. Krishnamurthy is the
recipient of the NSF CAREER Award from ANI in 2003. He was the
editor-in-chief for ACM MC2R from 2007 to 2009. He is a Fellow of the
IEEE.
Date: Thursday, February 14, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Abdullah Mueen
Cloud and Information Services Lab of Microsoft
Data mining and knowledge discovery algorithms for time series data use
primitives such as bursts, motifs, outliers, periods etc. as features. Fast
algorithms for finding these primitive features are usually approximate whereas
exact ones are very slow and therefore never used on real data. In this talk, I
present efficient and exact algorithms for two time series primitives, time
series motifs and shapelets. The algorithms speed up the exact search for motifs
and shapelets using efficient bounds based on the triangle inequality. The
algorithms are much faster than the trivial solutions and successfully discover
motifs and shapelets in real time series from diverse sensors such as EEG, ECG,
accelerometers, and motion capture. I present case studies on some of these data
sources and end with promising directions for new and improved primitives.
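For concreteness, the sketch below is the quadratic brute-force baseline for exact motif discovery under z-normalized Euclidean distance (a common choice in this line of work); the algorithms in the talk return the same answer while pruning most candidate pairs with triangle-inequality bounds. The synthetic series and motif length are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Brute-force time series motif discovery: find the pair of non-overlapping
# subsequences of length m with the smallest z-normalized Euclidean distance.
def znorm(x):
    return (x - x.mean()) / (x.std() + 1e-12)

def motif_pair(ts, m):
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    best, best_pair = np.inf, None
    for i in range(n):
        for j in range(i + m, n):            # skip overlapping (trivial) matches
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair, best

# Synthetic series with the same bump planted at two locations.
ts = rng.standard_normal(300)
bump = np.sin(np.linspace(0, np.pi, 20))
ts[40:60] += 3 * bump
ts[200:220] += 3 * bump
print(motif_pair(ts, 20))
```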
Bio:
Abdullah Mueen
earned his PhD in computer science at the University of California,
Riverside in 2012. His adviser was Professor Eamonn Keogh. He is primarily
interested in designing primitives for time series data mining. In addition, he
has experience working with different forms of data such as XML, DNA,
spectrograms, images and trajectories. He has published his work in the top
data mining conferences, including KDD, ICDM, and SDM. His dissertation was
selected as the runner-up for the SIGKDD Doctoral Dissertation Award in 2012.
Presently he is a scientist in the Cloud and Information Services Lab of
Microsoft and works on telemetry analytics.
Date: Tuesday, February 12, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Claire Le Goues
PhD Candidate
University of Virginia
"Everyday, almost 300 bugs appear...far too many for only the Mozilla
programmers to handle" --Mozilla developer, 2005
Software quality is a pernicious problem. Although 40 years of software
engineering research has provided developers considerable debugging support,
actual bug repair remains a predominantly manual, and thus expensive and
time-consuming, process. I will describe GenProg, a technique that uses
evolutionary computation to automatically fix software bugs. My empirical
evidence demonstrates that GenProg can quickly and cheaply fix a large
proportion of real-world bugs in open-source C programs. I will also briefly
discuss the atypical evolutionary search space of the automatic program repair
problem, and the ways it has challenged assumptions about software defects.
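The sketch below is a toy, GenProg-flavored repair loop, included only to illustrate searching over statement-level edits under test-suite guidance. GenProg itself operates on real C programs with a genetic (mutation plus crossover) search biased toward statements on failing executions; none of that fidelity is attempted here, and the tiny Python program, its bug, and its tests are invented.

```python
# Toy statement-level repair: the candidate program is a list of source lines,
# and fitness is the number of passing tests.
BUGGY = [
    "def total(xs):",
    "    s = 0",
    "    for x in xs:",
    "        s = s + x",
    "        s = s + 1",   # the bug: an extra increment per element
    "    return s",
]
TESTS = [(([1, 2, 3],), 6), (([],), 0), (([5],), 5)]

def fitness(lines):
    """Number of passing tests; candidates that fail to compile or crash score 0."""
    try:
        env = {}
        exec("\n".join(lines), env)
        return sum(env["total"](*args) == expected for args, expected in TESTS)
    except Exception:
        return 0

# GenProg-style edits reuse code already present in the program: delete a
# statement, or replace it with a copy of another statement. Here we simply
# enumerate every single edit and keep the best-scoring candidate.
candidates = []
for i in range(1, len(BUGGY)):
    candidates.append(BUGGY[:i] + BUGGY[i + 1:])                      # delete line i
    for j in range(1, len(BUGGY)):
        candidates.append(BUGGY[:i] + [BUGGY[j]] + BUGGY[i + 1:])     # replace i with a copy of j
best = max(candidates, key=fitness)
print("\n".join(best))
print("passing tests:", fitness(best), "of", len(TESTS))
```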
Bio:
Claire Le Goues
is a Ph.D. candidate in Computer Science at the University
of Virginia. Her research interests lie in the intersection of software
engineering and programming languages, with a particular focus on software
quality and automated error repair. Her work on automatic program repair has
been recognized with Gold and Bronze designations at the 2009 and 2012 ACM
SIGEVO "Humies" awards for Human-Competitive Results Produced by Genetic and
Evolutionary Computation and several distinguished and featured paper awards.
Date: Tuesday, February 5, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Yihua He
Yahoo
In the era of cloud computing, a hierarchical network design in a
traditional data center can no longer keep up with requirements in
terms of increased bandwidth and changing traffic characteristics. In
this talk, I will first present the challenges that a traditional data
center network architecture faces. I'll then share the rationale for
choosing and designing the next-generation network architecture to
address those challenges and bring 10G networking to the host level.
Finally, I will present the need for an SDN-based solution to
deploy, monitor, and troubleshoot this new network architecture, which
comes with a vastly increased number of switches, viable routes, and
configuration changes.
Bio:
Yihua He
is a member of the technical staff at Yahoo, where he is
involved in the architecture, design and automation of large scale
next-generation network infrastructures. He has numerous technical
publications in the area of Internet routing, topology, measurement
and simulation. He is a reviewer for a number of computer networking
journals and conferences. Prior to joining Yahoo, he was a graduate
student at the University of California, Riverside, where he received his
PhD degree in computer science in 2007.
Date: Thursday, January 31, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Kunjumon Vadakkan
University of Manitoba, Canada
Intelligence is often considered a secondary manifestation resulting from the
ability to memorize. What is the biological mechanism of memories? Current
biological experiments rely on specific behavioral motor outputs, such as
spoken language or locomotion, as measures of retrieved memories. But what
exactly are memories? If we view memories as virtual internal sensations formed
within the nervous system at the time of memory retrieval, how can we
investigate them further? In other words, can we study the virtual sensory
qualities of the internal sensations of memory? We examined possible basic units
of virtual internal sensations of memory at the time of retrieval,
hypothesized re-activable cellular changes from which they can occur, and traced
the locations of these cellular changes back to the time of associative learning
to identify feasible operational mechanisms. However, it is difficult to prove
the operation of such a mechanism in biological systems. This can only be
achieved by carrying out the gold-standard test of replicating the mechanism in
physical systems. Engineering challenges in this approach include devising
methods to convert the first-person perspective of internal sensations into
appropriate readouts. Experiments that translate the theoretically feasible
neuronal mechanisms of memory formation by both computational and engineering
methods are required. I will explain a possible biological mechanism with
substantiating evidence and will provide a broad outline of both computational
and engineering methods to test its operation in physical systems. There are
challenges ahead, but collaborative efforts between the neurosciences and the
physical and engineering sciences can take further steps.
Bio:
Kunjumon Vadakkan
is interested in understanding how internal sensations are created from
neuronal activities. Specific features of some of the diseases are likely to
provide clues to understand the normal functioning of the nervous system from
which formation of internal sensations may be understood. After graduating
in medicine in 1988 and practicing family medicine for a short period, Dr.
Vadakkan completed the MD program in Biochemistry at Calicut University, India.
This was followed by a Research Associate position at Jawaharlal Nehru
University, New Delhi, studying negative regulatory elements upstream of the
p53 gene. He moved to Canada in 1999 and completed an MSc (under Dr. Umberto
DeBoni) and a PhD (under Dr. Min Zhuo) at the University of Toronto. Later, he
did post-doctoral training in Dr. Mark Zylka's laboratory at the University of
North Carolina, Chapel Hill.
Currently, he is a 4th year Resident in Neurology at the University of
Manitoba.
Date: Tuesday, January 22, 2013
Time: 11:00 am — 11:50 am
Place: Mechanical Engineering 218
Mark Hoemmen
Sandia National Laboratories
USA
Protecting arithmetic and data from corruption due to hardware errors
costs energy. However, energy increasingly constrains modern computer
hardware, especially for the largest parallel computers being built
and planned today. As processor counts continue to grow, it will
become too expensive to correct all of these "soft errors" at system
levels, before they reach user code. However, many algorithms only
need reliability for certain data and phases of computation, and can
be designed to recover from some corruption. This suggests an
algorithm / system codesign approach. We will show that if the system
provides a programming model to applications that lets them apply
reliability only when and where it is needed, we can develop
"fault-tolerant" algorithms that compute the right answer despite
hardware errors in arithmetic or data. We will demonstrate this for a
new iterative linear solver we call "Fault-Tolerant GMRES" (FT-GMRES).
FT-GMRES uses a system framework we developed that lets solvers
control reliability per allocation and provides fault detection. This
project has also inspired a fruitful collaboration between numerical
algorithms developers and traditional "systems" researchers. Both of
these groups have much to learn from each other, and will have to
cooperate more to achieve the promise of exascale.
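The sketch below is not FT-GMRES itself; it is a much smaller iterative-refinement analogue of selective reliability, in which a reliable outer loop accepts corrections from an unreliable inner solve only when they actually reduce the reliably computed residual. The matrix, simulated fault rate, and tolerances are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Selective reliability, in miniature: the outer refinement loop is assumed to
# run reliably, while the cheap inner solve may be hit by simulated soft errors.
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = rng.standard_normal(n)

def unreliable_inner_solve(r, sweeps=10, fault_rate=0.2):
    """A few Jacobi sweeps for A d = r, occasionally corrupted by a fake soft error."""
    d = np.zeros_like(r)
    D, R = np.diag(A), A - np.diag(np.diag(A))
    for _ in range(sweeps):
        d = (r - R @ d) / D
    if rng.random() < fault_rate:
        d[rng.integers(n)] = 1e30            # simulated bit flip / corrupted value
    return d

x = np.zeros(n)
for it in range(100):                         # reliable outer loop
    r = b - A @ x                             # residual computed reliably
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    d = unreliable_inner_solve(r)
    # Accept only corrections that are finite and actually reduce the residual.
    if np.isfinite(d).all() and np.linalg.norm(b - A @ (x + d)) < np.linalg.norm(r):
        x = x + d
print("outer iterations:", it,
      "relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```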
Bio:
Mark Hoemmen
is a staff member at Sandia National Laboratories in
Albuquerque. He finished his PhD in computer science at the
University of California Berkeley in spring 2010. Mark has a
background in numerical linear algebra and performance tuning of
scientific codes. He is especially interested in the interaction
between algorithms, computer architectures, and computer systems, and
in programming models that expose the right details of the latter two
to algorithms. He also spends much of his time working on the
Trilinos library (trilinos.sandia.gov).
Date: Friday, December 7, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Zheng Cui
Department of Computer Science
University of New Mexico
Overlay-based virtual networking provides a powerful model for realizing virtual
distributed and parallel computing systems with strong isolation, portability,
and recoverability properties. However, in extremely high throughput and low
latency networks, such overlays can suffer from bandwidth and latency
limitations, which is of particular concern if we want to apply the model in HPC
environments. Through careful study of an existing very high performance
overlay-based virtual network system, we have identified two core issues
limiting performance: delayed and/or excessive virtual interrupt delivery into
guests, and copies between host and guest data buffers done during
encapsulation. We respond with two novel optimizations: optimistic, timer-free
virtual interrupt injection, and zero-copy cut-through data forwarding. These
optimizations improve the latency and bandwidth of the overlay network on 10
Gbps interconnects, resulting in near-native performance for a wide range of
microbenchmarks and MPI application benchmarks.
Bio:
Zheng Cui
is a PhD student in the Department of Computer Science at the University of New
Mexico. Her research interests include virtualization, virtual networking, and
HPC.
Date: Friday, November 30, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Thomas P. Caudell
Depts. of ECE, CS, and Psychology
University of New Mexico
Agent-based simulation has proven itself a valuable tool in the study of group
dynamics. Agents range from particles to ants to robots to people to societies.
Often, individual agent behavior is controlled by rule sets or statistical
learning algorithms. In this talk, I will describe an aspect of our research
that embeds biologically motivated artificial neural network architectures into
agents that are endowed with a rich set of sensors and actuators. These agents
reside in a 2D virtual Flatland where we are able to conduct experiments that
measure their performance as a function of neural architecture. I will begin
with an introduction to neural networks, describe the simulated agents and
Flatland, and then work through a series of architectures from simple to
complex, describing their operation and the effects they have on agent behavior.
I will end with a discussion of future directions in this type of research.
Bio:
Thomas P. Caudell
was appointed to direct UNM's Center for High Performance Computing beginning in
February 2007 and was promoted to full professor the same year. Dr. Caudell's
research interests include neural networks, virtual reality, machine vision, robotics, and
genetic algorithms. He teaches courses in programming, computer games, neural
networks, virtual reality, computer graphics and pattern recognition. He has
been active in the field of virtual reality and neural networks since 1986, has
more than 75 publications in these areas, and in 1993 helped organize IEEE's
first Virtual Reality Annual International Symposium. He is also an active
member of the IEEE, the International Neural Network Society, and the
Association for Computing Machinery.
Date: Friday, November 2, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Srikanth V. Krishnamurthy
Professor of
Computer Science
University of California, Riverside
There has been a recent explosion in the number of
applications, especially for mobile and social networking platforms.
This explosion raises a host of performance and security issues
that have to be adequately addressed. In this talk, I describe two
recent projects from my group, focusing on performance in the
wireless context and security in the social network context.
Specifically, I will describe our work on (i) auto-configuring WLANs
towards maximizing capacity, and (ii) building a distributed OSN
towards providing privacy with high availability. Common to the two
efforts is the effective management of resources, either towards
maximizing performance in the presence of bandwidth constraints or
minimizing cost while guaranteeing both privacy and high availability.
Below, I provide more details on the two parts of my talk.
The latest commercial WLAN products that have hit the market today are
based on 802.11n. 802.11n devices allow the use of channel bonding (CB),
wherein two adjacent frequency bands can be combined to form a new,
wider band to facilitate high data rate transmissions. However, the
use of a wider band on one link can exacerbate the interference on
nearby links. Furthermore, surprisingly, CB does not always provide
benefits even in interference-free settings and can degrade
performance in some cases. We investigate the reasons for why this is
the case via extensive experiments. Based on the lessons learned, we
design, implement and evaluate ACORN, an auto-configuration framework
for 802.11n WLANs. ACORN integrates the functions of user association
and channel allocation, since our study reveals that they are tightly
coupled when CB is used. We showcase the performance benefits of ACORN
via extensive experiments.
Shifting gears, we look at the acute need for privacy in OSNs. Today,
OSNs are plagued with privacy concerns. While there are prior
solutions towards provisioning privacy, they either impose high costs
on users by using excessive resources on the cloud, or compromise the
timeliness of sharing of data, by storing it on personal devices. We
design and implement C-3PO, an architecture that explicitly allows
users to privately share content with both minimum cost and high
availability of content. Specifically, C-3PO guarantees the
confidentiality of shared content both from untrusted cloud and OSN
providers, and undesired users. It minimizes costs by only caching
data/metadata associated with recently shared content in the cloud,
while storing the rest (stale content) on user's machines. C-3PO is
flexible and can be used either a basis for a stand-alone
decentralized private OSN, or as an add on to existing OSNs. The
latter option is especially attractive since it allows users to
integrate C-3PO seamlessly, with the OSN interface they use today. We
demonstrate the viability of C-3PO via extensive measurement studies
on Facebook and a prototype implementation on top of Facebook.
Bio:
Srikanth V. Krishnamurthy
received his Ph.D. degree in electrical
and computer engineering from the University of California at San
Diego in 1997. From 1998 to 2000, he was a Research Staff Scientist at
the Information Sciences Laboratory, HRL Laboratories, LLC, Malibu,
CA. Currently, he is a Professor of Computer Science at the University
of California, Riverside. His research interests are in wireless
networks, online social networks and network security. Dr.
Krishnamurthy is the recipient of the NSF CAREER Award from ANI in
2003. He was the editor-in-chief for ACM MC2R from 2007 to 2009. He is
a Fellow of the IEEE.
Date: Friday, October 19, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Lance R. Williams
Department of
Computer Science
University of New Mexico
We show how expressions written in a functional programming language
can be robustly evaluated on a modular asynchronous
spatial computer by compiling them into a distributed virtual machine
comprised of reified bytecodes undergoing diffusion and communicating
via messages containing encapsulated virtual machine states. Because the
semantics of the source language are purely functional, multiple instances
of each reified bytecode and multiple execution threads can coexist without
inconsistency in the same distributed heap.
Bio:
Lance R. Williams
received his BS degree in computer science from the Pennsylvania State
University and his MS and PhD degrees in computer science from the University of
Massachusetts. Prior to joining UNM, he was a post-doctoral scientist at NEC
Research Institute. His research interests include computer vision and graphics,
digital image processing, and neural computation.
Date: Friday, October 5, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Rafael Fierro
Department of
Electrical & Computer Engineering
University of New Mexico
As advances in mechanics, drive technology, microelectronics, control and communications make mobile robots ever more capable and affordable, the deployment of robotics networks is becoming a reality. A team of robots equipped with a diverse set of sensors, radios and actuators offers numerous advantages over a single unit. Some of the potential advantages include increased fault tolerance, redundancy, greater area coverage, distributed sensing and coordinated manipulation of large objects. Achieving the desired group behavior requires adequate integration of control and decision making mechanisms, and communication protocols.
In this talk, I will describe approaches that enable prioritized sensing and make use of teams of robotic agents with different capabilities when large search areas need to be investigated. A heterogeneous team allows the robots to specialize in their abilities and therefore accomplish sub-goals more efficiently, which in turn makes the overall mission more efficient. Moreover, I will present our recent results on planning for robotic routers to establish a communication network that will allow human operators or other agents to communicate with remote base stations or data fusion centers. Finally, I will outline our current work on key methodologies that enable agile load transportation using micro UAVs.
Bio:
Rafael Fierro
is an Associate Professor of the Department of Electrical & Computer Engineering, University of New Mexico where he has been since 2007. He received a M.Sc. degree in control engineering from the University of Bradford, England and a Ph.D. degree in electrical engineering from the University of Texas-Arlington in 1997. Prior to joining UNM, he held a postdoctoral appointment with the GRASP Lab at the University of Pennsylvania and a faculty position with the Department of Electrical and Computer Engineering at Oklahoma State University. His research interests include nonlinear and adaptive control, robotics, hybrid systems, autonomous vehicles, and multi-agent systems. He directs the Multi-Agent, Robotics, Hybrid and Embedded Systems (MARHES) Laboratory. Rafael Fierro was the recipient of a Fulbright Scholarship, a 2004 National Science Foundation CAREER Award, and the 2007 International Society of Automation (ISA) Transactions Best Paper Award. He is serving as Associate Editor for the IEEE Control Systems Magazine and IEEE Transactions on Automation Science and Engineering.
Date: Friday, September 28, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Jared Saia
Department of
Computer Science
University of New Mexico
Imagine we have a collection of agents, some of which are unreliable, and we want to build a reliable system. This fundamental problem is faced by many natural systems like social insect colonies, the brain and the immune system. A key component of these systems is that periodically all agents commit to a particular action. The Byzantine agreement problem formalizes the challenge of commitment by asking: Can a set of agents agree on a value, even if some of the agents are unreliable? Application areas of Byzantine agreement include: control systems, distributed databases, peer-to-peer systems, mechanism design, sensor networks, and trust management systems.
In this talk, we describe several recent results in Byzantine agreement. First, we describe an algorithm for Byzantine agreement that is scalable in the sense that each agent sends only O(sqrt(n)) bits, where n is the total number of agents. Second, we describe very efficient algorithms to solve Byzantine agreement in the case where all agents have access to a global coin. Finally, we describe a very recent result that gives an algorithm to solve Byzantine agreement in the presence of an adversary that is adaptive: the adversary can take over up to a third of the agents at any point during the execution of the algorithm. Our algorithm runs in expected polynomial time and is the first sub-exponential algorithm in this model.
Bio:
Jared Saia
is an Associate Professor of Computer Science at the University of New Mexico. His broad research interests are in theory and algorithms with a focus on designing distributed algorithms that are robust against a computationally unbounded adversary. He is the recipient of several grants and awards including an NSF CAREER award, School of Engineering Senior and Junior Faculty Research Excellence awards, and several best paper awards.
Date: Friday, September 21, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Joe Kniss
Department of
Computer Science
University of New Mexico
This talk will cover recent research and results at UNM's Advanced Graphics Lab and Art Research Technology and Science Lab in the areas of visualization, robotics, and mayhem. We combine language, physics, mathematics, and human interaction to motivate novel CS research.
Bio:
Joe Kniss
has been an Assistant Professor in the Department of Computer Science at UNM since 2007. He is the Founding Director of UNM's Advanced Graphics Lab.
Date: Friday, September 7, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Patrick Bridges
Department of
Computer Science
University of New Mexico
The oriental board game Go (Chinese: Wei'qi, Korean: Baduk, Japanese:
Igo) has long been one of the most challenging board games for
computers to play. In chess, for example, computers are as strong as or
stronger than the best human players, backgammon programs play at world
championship levels, and checkers is actually solved. In contrast,
computer Go programs have long been at best no stronger than an
average club player. This is no longer true. Relatively recent
advances in computer Go programs have resulted in dramatic increases in
computer strength. Computers can now hold their own against
professional players on reduced-size 9x9 boards, and can hold their own
against, and beat, reasonably strong (amateur dan-level) human players.
In this talk, I describe why Go has historically been difficult for
computers to play well and the recent technical advances that have
enabled the large increases in computer Go program strength. As part
of this, I will also overview the basics of the game itself, and
present some recent examples of the growth in computer strength
(including one that involved a $10,000 bet). Finally, I will discuss
both the future prospects of computer Go play, and the broader
relevance of the techniques used to make strong computer Go programs
to computer science in general.
Bio:
Patrick Bridges
is an associate professor at the University of New Mexico in the Department
of Computer Science. He did his undergraduate work at Mississippi State
University and received his Ph.D. from the University of Arizona in December of
2002. His research interests broadly cover operating systems and networks,
particularly scaling, composition, and adaptation issues in large-scale
systems. He works with collaborators at Sandia, Los Alamos, and Lawrence
Berkeley National Laboratories, IBM Research, AT&T Research, and a variety of
universities.
Date: Friday, August 31, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Matthew G. F. Dosanjh
Department of
Computer Science
University of New Mexico
Historically, scientific computing applications have been statically linked before running on massively parallel High Performance Computing (HPC) platforms. In recent years, demand for supporting dynamically linked applications at large scale has increased. When programs running at large scale dynamically load shared objects, they often request the same file from shared storage. These independent requests tax the shared storage and the network, causing a significant delay in computation time. In this paper, we propose to leverage a proven file sharing technique, BitTorrent, abstracted by an on-node FUSE interface to create a system-level distribution method for these files. We detail our proposed methodology, related work, and our current progress.
Bio:
Matthew G. F. Dosanjh
is a third year PhD student advised by Professor Patrick G. Bridges within the UNM Department of Computer Science. He received his bachelor's degree in Computer Science from UNM in the spring of 2010 and decided to stay at UNM to pursue a PhD. His research interests center on high performance computing, particularly scalability and resilience.
Date: Friday, August 24, 2012
Time: 12:00 pm — 12:50 pm
Place: Centennial Engineering Center 1041
Dewan Ibtesham
Department of
Computer Science
University of New Mexico
The increasing size and complexity of high performance
computing (HPC) systems have led to major concerns over fault
frequencies and the mechanisms necessary to tolerate these faults.
Previous studies have shown that state-of-the-field checkpoint/restart
mechanisms will not scale sufficiently for future generation systems.
Therefore, optimizations that reduce checkpoint overheads are
necessary to keep checkpoint/restart mechanisms effective. In this
work, we demonstrate that checkpoint data compression is a feasible
mechanism for reducing checkpoint commit latencies and storage
overheads. Leveraging a simple model for checkpoint compression
viability, we show: (1) checkpoint data compression is feasible for
many types of scientific applications expected to run on extreme
scale systems; (2) checkpoint compression viability scales with
checkpoint size; (3) the choice of user-level versus system-level checkpointing
has little impact on checkpoint compression viability; and (4) checkpoint
compression viability scales with application process count. Lastly,
we describe the impact that checkpoint compression might have on
future generation extreme scale systems.
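A back-of-the-envelope version of the viability question looks like the following: compression pays off when the time to compress plus the time to write the smaller checkpoint beats the time to write the original. The numbers below are placeholders, not measurements from this work.

```python
# Checkpoint-compression viability, in one comparison. All values are invented.
checkpoint_gb     = 32.0    # uncompressed checkpoint size per node (GB)
storage_bw_gbps   = 0.5     # effective bandwidth to shared storage (GB/s)
compress_bw_gbps  = 1.5     # compression throughput (GB/s)
compression_ratio = 0.6     # compressed size / original size

t_plain      = checkpoint_gb / storage_bw_gbps
t_compressed = (checkpoint_gb / compress_bw_gbps
                + compression_ratio * checkpoint_gb / storage_bw_gbps)

print(f"write uncompressed: {t_plain:.1f} s, compress + write: {t_compressed:.1f} s")
print("compression viable:", t_compressed < t_plain)
```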
Bio:
Dewan Ibtesham
is a third year PhD student advised by Professor
Dorian Arnold within the UNM Department of Computer Science.
He received his bachelor's degree in Computer Science and
Engineering from BUET (Bangladesh University of Engineering and
Technology). After working for two and a half years in the software industry,
he moved to the U.S. and started graduate school in fall 2009. His
research interests are generally in high performance computing and
large-scale distributed systems; in particular, making sure that
HPC systems are fault tolerant and reliable for users so that the
full potential of the systems is properly utilized.
Date: Friday, May 4, 2012
Time: 4:00 pm — 5:00 pm
Place: Centennial Engineering Center B146 (in the
basement)
Daniela Oliveira
Bowdoin College
In the last ten years virtual machines (VMs) have been
extensively used for security-related applications, such as intrusion
detection systems, malicious software (malware) analyzers and secure
logging and replay of system execution. A VM is high-level software
designed to emulate a computer's hardware. In the traditional usage
model, security solutions are placed in a VM layer, which has complete
control of the system resources. The guest operating system (OS) is
considered to be easily compromised by malware and runs unaware of
virtualization. The cost of this approach is the semantic gap problem,
which hinders the development and widespread deployment of
virtualization-based security solutions: there is significant
difference between the state observed by the guest OS (high level
semantic information) and by the VM (low level semantic information).
The guest OS works on abstractions such as processes and files, while
the VM can only see lower-level abstractions, such as CPU and main
memory. To obtain information about the guest OS state, these
virtualization solutions use a technique called introspection, by
which the guest OS state is inspected from the outside (the VM layer),
usually by trying to build a map of the OS layout in an area of memory
where these solutions can analyze it. We propose a new way to perform
introspection, by having the guest OS, traditionally unaware of
virtualization, actively collaborate with a VM layer underneath it by
requesting services and communicating data and information as equal
peers in different levels of abstraction. Our approach allows for
stronger and more fine-grained and flexible security approaches to be
developed, and it is no less secure than the traditional model, as
introspection tools also depend on untampered OS data and code to
report correct results.
Bio:
Daniela Oliveira
is an Assistant Professor in the Department of
Computer Science at Bowdoin College. She received her PhD in Computer
Science in 2010 from the University of California at Davis where she
specialized in computer security and operating systems. Her current
research focuses on employing virtual machine and operating systems
collaboration to protect OS kernels against compromise. She is also
interested in leveraging social trust to help distinguish between benign
and malicious pieces of data. She is the recipient of a 2012 NSF CAREER
Award.
Date: Thursday, May 3, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Carola Wenk
Associate Professor of Computer
Science
University of Texas at San Antonio
Geometric shapes are at the core of a wide range of application areas.
In this talk we will discuss how approaches from computational
geometry can be used to solve shape matching problems arising in a
variety of applications including biomedical areas and intelligent
transportation systems. In particular, we will discuss point pattern
matching algorithms for the comparison of 2D electrophoresis gels, as
well as algorithms to compare and process trajectories for improved
navigation systems and for live cell imaging.
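As one concrete example of a trajectory-comparison primitive used in this general area, the sketch below computes the discrete Fréchet distance between two polygonal curves. The talk does not commit to this particular measure, and the example curves are invented.

```python
import math
from functools import lru_cache

# Discrete Frechet distance between two polygonal curves given as point lists.
def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def d(i, j):
        cost = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return cost
        if i == 0:
            return max(cost, d(0, j - 1))
        if j == 0:
            return max(cost, d(i - 1, 0))
        # Either curve may advance, but neither may move backwards.
        return max(cost, min(d(i - 1, j), d(i - 1, j - 1), d(i, j - 1)))
    return d(len(P) - 1, len(Q) - 1)

gps_track   = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.0)]
map_segment = [(0, 0), (1, 0.0), (2, 0.0), (3, 0.0)]
print(discrete_frechet(gps_track, map_segment))
```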
Bio:
Carola Wenk
is an Associate Professor of Computer Science at the
University of Texas at San Antonio (UTSA). She received her PhD from the Free University of Berlin, Germany. Her research area is algorithms and data structures, in particular geometric algorithms and
shape matching. She has 40 peer-reviewed publications, 22 with students, and she is actively involved in several applied
projects including topics in biomedical areas and in intelligent
transportation systems. She is the principal investigator on a $1.9M
NIH grant funding the Computational Systems Biology Core Facility at
UTSA. Dr. Wenk won an NSF CAREER award as well as research, teaching,
and service awards at UTSA. She is actively involved in service to the
university, including serving as the Chair of the Faculty Senate and
as the Faculty Advisor for two student organizations.
Date: Tuesday, May 1, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Pradeep Sen
Department of Electrical
and Computer Engineering
University of New Mexico
Image synthesis is the process of generating an image from a scene
description that includes geometry, material properties, and camera/light
positions. This is a central problem in many applications, ranging from
rendering images for movies/videogames to generating realistic
environments for training and tele-presence applications. The most
powerful methods for photorealistic image synthesis are based on Monte
Carlo (MC) algorithms, which simulate the full physics of light transport
in a scene by estimating a series of multi-dimensional integrals using a
set of random point samples. Although these algorithms can produce
spectacular images, they are plagued by noise at low sampling rates and
therefore require long computation times (as long as a day per image) to
produce acceptable results. This has made them impractical for many
applications and limited their use in real production environments. Thus,
solving this issue has become one of the most important open problems in
image synthesis and has been the subject of extensive research for almost
30 years.
In this talk, I present a new way to think about the source of Monte Carlo
noise, and propose how to identify it in an image using a small number of
computed samples. To do this, we treat the rendering system as a black
box and calculate the statistical dependency between the outputs and the
random parameter inputs using mutual information. I then show how we can
use this information with an image-space, cross-bilateral filter to remove
the MC noise but preserve important scene details. This process allows us
to generate images in a few minutes that are comparable to those that took
hundreds of times longer to render. Furthermore, our algorithm is fully
general and works for a wide range of Monte Carlo effects, including depth
of field, area light sources, motion blur, and path tracing. This work
opens the door to a new set of algorithms that make Monte Carlo rendering
feasible for more applications.
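To give a rough sense of the filtering step (a simplified sketch, not the implementation described in the talk), a cross-bilateral filter averages neighboring pixels weighted by spatial distance and by similarity in auxiliary scene features, with a per-pixel noise estimate controlling how aggressively each pixel is smoothed. All array names and parameters below are illustrative assumptions.

# Minimal sketch of an image-space cross-bilateral filter of the kind the talk
# combines with a mutual-information-based noise estimate. Illustration only;
# 'color', 'feature', and 'noise_weight' are assumed inputs.
import numpy as np

def cross_bilateral(color, feature, noise_weight, radius=3, sigma_s=2.0, sigma_f=0.1):
    """color: HxWx3 noisy image; feature: HxWxK auxiliary features (e.g. depth);
    noise_weight: HxW in [0,1], larger = pixel treated as noisier (filter more)."""
    H, W, _ = color.shape
    out = np.zeros_like(color)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial Gaussian, widened where the pixel is estimated to be noisy.
            s = sigma_s * (0.5 + noise_weight[y, x])
            w_spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * s * s))
            # Range term on auxiliary features, not on the noisy color itself.
            df = feature[y0:y1, x0:x1] - feature[y, x]
            w_feature = np.exp(-np.sum(df * df, axis=-1) / (2 * sigma_f ** 2))
            w = w_spatial * w_feature
            out[y, x] = np.tensordot(w, color[y0:y1, x0:x1], axes=([0, 1], [0, 1])) / w.sum()
    return out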
Bio:
Pradeep Sen
is an Assistant Professor in the Department of Electrical
and Computer Engineering at the University of New Mexico. He received his
B.S. in Computer and Electrical Engineering from Purdue University in 1996
and his M.S. in Electrical Engineering from Stanford University in 1998 in
the area of electron-beam lithography. After two years at a profitable
startup company which he co-founded, he joined the Stanford Graphics Lab
where he received his Ph.D. in Electrical Engineering in June 2006,
advised by Dr. Pat Hanrahan.
He joined the faculty at UNM in the Fall of 2006, where he founded the UNM
Advanced Graphics Lab. His core research combines signal processing
theory with computation and optics/light-transport analysis to address
problems in computer graphics, photography, and computational image
processing. He is the co-author of five ACM SIGGRAPH papers (three
at UNM) and has been awarded more than $1.7 million in research funding,
including an NSF CAREER award to study the application of sparse
reconstruction algorithms to computer graphics and imaging. He received
two best-paper awards at the Graphics Hardware conference in 2002 and
2004, and the Lawton-Ellis Award in 2009 and the Distinguished Researcher
Award in 2012, both from the ECE department at UNM. Dr. Sen has also
started a successful educational program at UNM, where his videogame
development program is now ranked by the Princeton Review as one of the
top 10 undergraduate programs in North America.
Date: Thursday, April 26, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Patrick Gage Kelley
Carnegie Mellon University
Users are increasingly expected to manage complex privacy settings in their normal online interactions. From shopping to social networks, users make decisions about sharing their personal information with corporations and contacts, frequently with little assistance. Current solutions require consumers to read long documents or control complex settings buried deep in management interfaces. Because these mechanisms are difficult to use and have limited expressiveness, users often have little to no effective control.
My goal is to help people cope with the shifting privacy landscape. My work
explores many aspects of how users make decisions regarding privacy, while my
dissertation focuses on two specific areas: online privacy policies and mobile
phone application permissions. I explored consumers' current understanding of
privacy in these domains, and then used that knowledge to iteratively design,
build, and test more comprehensible information displays. I simplified online
privacy policies through a "nutrition label" for privacy: a simple, standardized
label that helps consumers compare website practices. I am currently working to
redesign the Android permissions display, which I have found to be incomprehensible to most users.
Bio:
Patrick Gage Kelley
is a Ph.D. candidate in Computation, Organizations, and Society at Carnegie
Mellon University's (CMU) School of Computer Science, who is co-advised by
Lorrie Faith Cranor and Norman Sadeh. His research centers on information
design, usability, and education involving privacy. He has worked on projects
related to passwords, location-sharing, privacy policies, mobile apps, Twitter,
Facebook relationship grouping, and the use of standardized, user-friendly
privacy displays. He also works with the CMU School of Art's STUDIO for
Creative Inquiry in new media arts and information visualization. For more see
http://patrickgagekelley.com
Date: Tuesday, April 24, 2012
Time: 11:00 am — 12:15 pm
Place: Centennial Engineering Center 1041 (NOTE
DIFFERENT LOCATION FROM USUAL)
Stephen Checkoway
Computer Science & Engineering
University of California San Diego
The stereotypical view of computing, and hence computer
security, is a landscape filled with laptops, desktops, smartphones
and servers; general purpose computers in the proper sense. However,
this is but the visible tip of the iceberg. In fact, most computing
today is invisibly embedded into systems and environments that few of
us would ever think of as computers. Indeed, applications in
virtually all walks of modern life, from automobiles to medical
devices, power grids to voting machines, have evolved to rely on the
same substrate of general purpose microprocessors and (frequently)
network connectivity that underlie our personal computers. Yet along
with the power of these capabilities come the same potential
risks. My research has focused on understanding the scope of such
problems by exploring vulnerabilities in the embedded environment, how
they arise, and the shape of the attack surfaces they expose. In
this talk, I will particularly discuss recent work on two large-scale
platforms: modern automobiles and electronic voting machines. In each
case, I will explain how implicit or explicit assumptions in the
design of the systems have opened them to attack. I will demonstrate
these problems, concretely and completely, including arbitrary control
over election results and remote tracking and control of an unmodified
automobile. I will explain the nature of these problems, how they
have come to arise, and the challenges in hardening such systems going
forward.
Bio:
Stephen Checkoway
is a Ph.D. candidate in Computer Science and
Engineering at UC San Diego. Before that he received his B.S. from
the University of Washington. He is also a member of the Center for
Automotive Embedded Systems Security, a collaboration between UC San
Diego and the University of Washington. Checkoway's research spans a
range of applied security problems including the security of embedded
and cyber-physical systems, electronic voting, and memory safety
vulnerabilities.
Date: Monday, April 23, 2012
Time: 3:30 pm — 4:30 pm
Place: Centennial Engineering Center 1041 (NOTE
DIFFERENT LOCATION AND TIME)
Barton P. Miller
Computer Sciences Department
University of Wisconsin
Malware attacks necessitate extensive forensic analysis efforts that are
manual-labor intensive because of the analysis-resistance techniques that
malware authors employ. The most prevalent of these techniques are code
unpacking, code overwriting, and control transfer obfuscations. We
simplify the analyst's task by analyzing the code prior to its execution
and by providing the ability to selectively monitor its execution. We
achieve pre-execution analysis by combining static and dynamic techniques
to construct control- and data-flow analyses. These analyses form the
interface by which the analyst instruments the code. This interface
simplifies the instrumentation task, allowing us to reduce the number of
instrumented program locations by a hundred-fold relative to existing
instrumentation-based methods of identifying unpacked code. We implement
our techniques in SD-Dyninst and apply them to a large corpus of malware, performing analysis tasks such as code coverage tests and call-stack
traversals that are greatly simplified by hybrid analysis.
Bio:
Barton P. Miller
is a Professor of Computer Sciences at the University of Wisconsin, Madison. He received his B.A. degree from the University of California, San Diego in 1977, and M.S. and Ph.D. degrees in Computer Science from the University of California, Berkeley in 1980 and 1984.
Professor Miller is a Fellow of the ACM.
Date: Tuesday, April 17, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Manuel Egele
University of California,
Santa Barbara
Two complementary approaches exist to analyze potentially malicious software
(malware): static and dynamic analysis. Static analysis reasons about the
functionality of the analyzed application by analyzing the program's code in
source, binary, or any intermediate representation. In contrast, dynamic
analysis monitors the execution of an application and the effects the
application has on the execution environment. In this talk I will present a
selection of my research in both areas -- static and dynamic analysis.
On commodity x86 computer systems the browser has become a central hub of
activity and information. Hence, a plethora of malware exists that tries to
access and leak the sensitive information stored in the browser's context.
Accordingly, I will present the research and results from my dynamic analysis
system (TQANA) targeting malicious Internet Explorer plugins. TQANA implements
full system data-flow analysis to monitor the propagation of sensitive data
originating from within the browser. This system successfully detects a variety
of spyware components that steal sensitive data (e.g., the user's browsing history) from the browser.
In the mobile space, smartphones have become similar hubs for online
communication and private data. The protection of this sensitive data is of
great importance to many users. Therefore, I will demonstrate how my system
(PiOS) leverages static binary analysis to detect privacy violations in
applications targeted at Apple's iOS platform. PiOS automatically detects a
variety of privacy breaches, such as the transmission of GPS coordinates, or
leaked address books. Applications that transmit address book contents have
recently drawn the attention of mainstream media, as many popular social network
applications (e.g., Path, Gowalla, or Facebook) transmit a copy of the user's address book to
their backend servers. The static analysis in PiOS is also the foundation for a
dynamic enforcement system that implements control-flow integrity (CFI) on the
iOS platform. Thus, this system is suitable to prevent the broad range of
control flow diverting attacks on the iOS platform.
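As a toy illustration of the data-flow tracking idea underlying such systems (this sketch is neither TQANA nor PiOS, and every class and function name here is invented), sensitive values can carry taint labels that propagate through computation and trigger an alarm when they reach a network sink.

# Toy sketch of taint tracking: values derived from sensitive sources carry a
# label, and an alarm is raised if labeled data reaches a network sink.
# Purely illustrative; nothing here corresponds to the speaker's actual tools.

class Tainted:
    def __init__(self, value, sources):
        self.value = value
        self.sources = frozenset(sources)   # e.g. {"browsing_history", "gps"}

    def __add__(self, other):               # propagation: taint flows through operations
        ov = other.value if isinstance(other, Tainted) else other
        os = other.sources if isinstance(other, Tainted) else frozenset()
        return Tainted(self.value + ov, self.sources | os)

def send_to_network(data, sink="backend server"):
    if isinstance(data, Tainted) and data.sources:
        raise RuntimeError(f"privacy violation: {set(data.sources)} -> {sink}")
    print("ok to send:", data)

history = Tainted("visited: example.com", {"browsing_history"})
send_to_network("version=1.0")                            # untainted: allowed
try:
    send_to_network(Tainted("report=", set()) + history)  # tainted flow: blocked
except RuntimeError as err:
    print("blocked:", err)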
Bio:
Manuel Egele
currently is a post-doctoral researcher at the Computer Security Group at the
Department of Computer Science of the University of California, Santa Barbara.
He received his Ph.D. in January 2011 from the Vienna University of Technology
under his advisors Christopher Kruegel and Engin Kirda. Before starting his
work as a post-doc he visited the Computer Security Group at UCSB as part of
his Ph.D. studies. Similarly, he spent six months visiting the iSeclab's
research lab in France (i.e., Institute Eurecom). He was very fortunate to meet
and work with interesting and smart people at all these locations.
His research interests include most aspects of systems security, such as mobile security, binary and malware analysis, and web security.
Since 2009 he has helped organize UCSB's iCTF. In 2010 they were the first CTF
to feature a challenge with effects on the physical world (the teams
had to control a foam missile launcher). In 2011 they took this concept one step
further, and teams from around the globe could remotely control an unmanned aerial
vehicle in the conference room of UCSB's Computer Science Department. Before
joining the organizing team for the iCTF, he competed with the
We_0wn_Y0u team of the Vienna University of Technology, as well as with the team
of the Institute Eurecom. He has also competed with the Shellphish
team at several DefCon CTF competitions in Las Vegas.
Date: Tuesday, April 10, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Mohit Tiwari
University of California,
Berkeley
The synergy between computer architecture and program analysis can reveal vital insights into the design of secure systems. The ability to control information as it flows through a machine is a key primitive for computer security; however, software-only analyses are vulnerable to leaks in the underlying hardware. In my talk, I will demonstrate how complete information flow control can be achieved by co-designing an analysis together with the processor architecture.
The analysis technique, GLIFT, is based on the insight that all information flows -- whether explicit, implicit, or timing channels -- look surprisingly alike at the gate level where assembly language descriptions crystallize into precise logical functions. The architecture introduces Execution Leases, a programming model that allows a small kernel to directly control the flow of all secret or untrusted information, and whose implementation is verifiably free from all digital information leaks. In the future, my research will use this cross-cutting approach to build systems that make security and privacy accessible to mainstream users while supporting untrusted applications across cloud and client devices.
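As a hedged sketch of the gate-level idea, the commonly cited tracking rule for a two-input AND gate marks the output tainted only when a tainted input could actually change the output value. The code below illustrates that rule; it is not the speaker's implementation.

# Sketch of gate-level information flow tracking for a 2-input AND gate.
# A taint bit propagates only if flipping the tainted input could change the
# output given the other input's value. Illustrative only.

def and_gate_with_taint(a, ta, b, tb):
    """a, b: bit values (0/1); ta, tb: taint bits. Returns (out, out_taint)."""
    out = a & b
    # Taint propagates if both inputs are tainted, or a tainted input is paired
    # with a '1' on the other input (so the tainted input controls the output).
    out_taint = (ta & tb) | (ta & b) | (tb & a)
    return out, out_taint

# A tainted 0 ANDed with an untainted 0 cannot affect the output, so the
# output stays untainted; ANDed with an untainted 1, the taint propagates.
print(and_gate_with_taint(0, 1, 0, 0))   # (0, 0)
print(and_gate_with_taint(0, 1, 1, 0))   # (0, 1)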
Bio:
Mohit Tiwari
is a Computing Innovation Fellow at University of California, Berkeley. He received his PhD in Computer Science from University of California, Santa Barbara in 2011. His research uses computer architecture and program analyses to build secure, reliable systems, and has received a Best Paper award at PACT 2009, an IEEE Micro Top Pick in 2010, and the Outstanding Dissertation award in Computer Science at UCSB in 2011.
Date: Thursday, April 5, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Gruia-Catalin Roman
University of New Mexico
Dean
of the School of Engineering
Mobile computing is a broad field of study made possible by advances
in wireless technology, device miniaturization, and innovative
packaging of computing, sensing, and communication resources. This
talk is intended as a personal intellectual journey spanning a decade
of research activities, which have been shaped by the concern with
rapid development of applications designed to operate in the fluid and
dynamic settings that characterize mobile and sensor networks. The
presence of mobility often leads to fundamental changes in our
assumptions about the computing and communication environment and
about its relation to the physical world and the user community.
This, in turn, can foster a radical reassessment of one's perspective
on software system design and deployment. Several paradigm shifts made
manifest by considerations having to do with physical and logical
mobility will be examined and illustrated by research involving formal
models, algorithms, middleware, and protocols. Special emphasis will
be placed on problems that entail collaboration and coordination in
the mobile setting.
Bio:
Gruia-Catalin Roman
was born in Bucharest, Romania. He studied general engineering topics for two
years at the Polytechnic Institute of Bucharest and became the beneficiary of a
Fulbright Scholarship. In the fall of 1971, Roman entered the very first
computer science freshman class at the University of Pennsylvania. In the years
that followed, he earned B.S. (1973), M.S. (1974), and Ph.D. (1976) degrees,
all in computer science. At the age of 25, he began his academic career as
Assistant Professor at Washington University in St. Louis. In 1997, Roman was
appointed department head. Under his leadership, the Department of Computer
Science and Engineering experienced a dramatic transformation in faculty size,
level of research activities, financial strength, and reputation. In 2004, he
was named the Harold B. and Adelaide G. Welge Professor of Computer Science at
Washington University. On July 1, 2011, he became the 18th dean of the
University of New Mexico School of Engineering. His aspirations as dean are
rooted in his conviction that engineering and computing play central and
critical roles in facilitating social and economic progress. Roman sees the
UNM School of Engineering as being uniquely positioned to enable scientific
advances, technology transfer, and workforce development on the state,
national, and international arenas in ways that are responsive to both
environmental and societal needs and that build on the rich history, culture,
and intellectual assets of the region.
Date: Tuesday, April 3, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Amitabh Trehan
Technion, Haifa, Israel
Consider a simple turn-based game between an attacker and a defender (you) playing on a large connected graph: in her turn, the attacker deletes a node, and in your turn you must add edges among the neighbors of the deleted node so that, at every point in the game, no node has increased its degree by more than a constant and the diameter of the network has not blown up. Now, consider that the nodes themselves are smart computers or agents that know nothing about their network beyond their 'nearby' nodes and have no centralized help; in essence, they must maintain certain local and global properties through only local actions while under attack from a powerful adversary.
The above game captures the essence of distributed self-healing in reconfigurable networks (e.g. peer-to-peer, ad-hoc and wireless mesh networks etc). Many such challenging and interesting scenarios arise in this context. We will look at some of these scenarios and at our small but rich and evolving body of work. Our algorithms simultaneously maintain a subset of network properties such as connectivity, degree, diameter, stretch, subgraph density, expansion and spectral properties. Some of our work uses the idea of virtual graphs - graphs consisting of 'virtual' nodes simulated by the real nodes, an idea that we will look at in more detail.
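As a toy, centralized illustration of one healing step (the speaker's algorithms are distributed and maintain stronger guarantees), the deleted node's neighbors can be reconnected through a balanced binary tree among themselves, so each survivor gains only a constant number of edges. The graph and library calls below are invented for the example.

# Toy sketch of a self-healing step: when the adversary deletes node v, connect
# v's former neighbors in a balanced binary tree so no neighbor's degree grows
# by more than a constant. Centralized and simplified for illustration only.
import networkx as nx

def heal(G, v):
    neighbors = sorted(G.neighbors(v))
    G.remove_node(v)
    # Link neighbor i to neighbors 2i+1 and 2i+2: a balanced binary tree,
    # so each surviving node gains at most 3 new edges.
    for i, u in enumerate(neighbors):
        for child in (2 * i + 1, 2 * i + 2):
            if child < len(neighbors):
                G.add_edge(u, neighbors[child])
    return G

G = nx.star_graph(8)        # node 0 is a hub with 8 leaves
heal(G, 0)                  # delete the hub; the leaves reconnect via a binary tree
print(nx.is_connected(G), max(d for _, d in G.degree()))  # True 3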
Bio:
Amitabh Trehan
is a postdoc at Technion, Haifa, Israel. There, he works with Profs. Shay Kutten and Ron Lavi on distributed algorithms and game theory. He has earlier also worked as a postdoc with Prof. Valerie King (at UVIC, Canada). He did his Ph.D. with Prof. Jared Saia at UNM on algorithms for self-healing networks.
His broad research interests are in theory and algorithms, with specific interests in distributed algorithms, networks, and game theory. These include designing efficient distributed algorithms for robustness/self-healing/self-* properties in systems under adversarial attack, and studying game-theoretic and other mechanisms for evolving networks, such as social networks or distributed systems (e.g., P2P networks).
Date: Tuesday, March 27, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Jeremy Epstein
SRI
International
Internet voting is in the headlines, frequently coupled with the question, "If I
can bank online and shop online, why can't I vote online?" This presentation will
describe the range of systems that fall under the name "internet voting,"
explain the security issues in today's internet voting systems, recommend what
can and can't be done safely, discuss limitations of experimental systems, and point to future directions and areas for research.
Bio:
Jeremy Epstein
is Senior Computer Scientist at SRI International in Arlington, VA where his
research interests include voting systems security and software assurance.
Prior to joining SRI, Jeremy led product security for an international software
vendor. He's been involved with varying aspects of security for over 20 years.
He is Associate Editor in Chief of IEEE Security & Privacy magazine, an
organizer of the Annual Computer Security Applications Conference, and serves
on too many program committees. Jeremy grew up in Albuquerque where he attended
Sandia High School and UNM (part time while in high school), before fleeing the
big city to earn a B.S. from New Mexico Tech in Computer Science, followed by
an M.S. from Purdue University. He's lived in Virginia for 25 years, and misses green chile every day.
Date: Tuesday, March 20, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Brian Danielak
University of Maryland, College Park
Students can take remarkably different paths toward the development of design
knowledge and practice. Using data from a study of an introductory programming
course for electrical engineers, we investigate how students learn elements of
design in the course, and how their code (and the process by which they generate
it) reflects what they're learning about design. Data are coordinated across
clinical interviews, ethnographic observation, and fine-grained evolution of
students' code, exploring the question of what it means to "know" design
practices common to programming, such as functional abstraction and hierarchical decomposition.
Bio:
Brian Danielak
is currently a fourth-year Ph.D. student in Science Education Research at the
University of Maryland. At the moment, he studies how university engineering
students engage in mathematical and physical sensemaking in their courses. He
works with Ayush Gupta and his advisor, Andy Elby. His research interests
include mathematical sensemaking and symbolic reasoning, representational
competency in scientific argumentation, students' epistemological beliefs in
science and mathematics, and interplays of emotion, cognition, and student epistemology. He graduated from the University of
Buffalo Honors Program, with degrees in Chemistry (BA, 2007) and English (BA,
2007). While there, he worked as an undergraduate research fellow with Kenneth
Takeuchi. He also completed an Undergraduate Honors Thesis on the relationships
between narrative and science under the direction of Robert Daly.
Date: Tuesday, March 6, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Suzanne Kelly
Sandia National Lab
Sandia National Laboratories has a long history of successfully
applying high performance computing (HPC) technology to solve
scientific problems. We drew upon our experiences with numerous
architectural and design features when planning our most recent
computer systems. This talk will present the key issues that were
considered. Important principles are performance balance between the
hardware components and scalability of the system software. The talk
will conclude with lessons learned from the system deployments.
Bio:
Suzanne Kelly
is a distinguished member of technical staff at
Sandia National Laboratories. Suzanne holds a BS in computer science
from the University of Michigan and an MS in computer science from
Boston University. Suzanne has worked on projects related to
system-level software as well as information systems. In addition to
her project management activities, she currently has responsibility
for the system software on the Cielo supercomputer. Her previous
assignments were leading the operating system teams for the Red Storm
and ASCI Red supercomputers. Prior to her 6-year sojourn in
information systems for nuclear defense technologies, she worked on
various High Performance Computing file archive systems.
Date: Thursday, March 1, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
William G. Griswold
Department of Computer Science & Engineering
University of California, San Diego
Recent revelations about the impact of air pollution on our health are
troubling, yet air pollution and the risks it poses to us are largely
invisible. Today, the infrastructure of our regulatory institutions is
inadequate for the cause: sensors are few and often far from where we live. What
about the air quality on your jogging route or commute? Can you be told when it
matters most? Recent advances in computing technology put these capabilities
within reach. By pervasively monitoring our immediate environs, aggregating the
data for analysis, and reflecting the results back to us quickly, we can avoid
toxic locales, appreciate the consequences of our behavior, and together seek a
mandate for change. In this talk, I describe CitiSense, which leverages the
proliferation of mobile phones and the advent of cheap, small sensors to develop
a new kind of "citizen infrastructure." We have built a robust end-to-end
prototype system, exposing an abundance of challenges in power management,
software architecture, privacy, inference with "noisy" commodity sensors, and
interaction design. The most critical challenge is providing an always-on
experience when depending on the personal devices of users. I report on early
research results, including those of our first user study, which reveal the
incredible potential for participatory sensing of air quality, but also open
problems.
Bio:
William G. Griswold
is a Professor of Computer Science and Engineering at UC
San Diego. He received his Ph.D. in Computer Science from
the University of Washington in 1991, and his BA
in Mathematics from the University of Arizona in 1985. His research
interests include ubiquitous computing and software engineering, and
educational technology. Griswold is a pioneer in the area of software
refactoring. He also built ActiveCampus, one of the early mobile
location-aware systems. His current CitiSense project is investigating
technologies for low-cost ubiquitous real-time air-quality sensing.
He was PC Chair of SIGSOFT FSE in 2002 and PC co-Chair of ICSE in 2005.
He is the current past-Chair of ACM SIGSOFT. He is a member of the ACM and
the IEEE Computer Society.
Date: Tuesday, February 28, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Elizabeth Jessup
University of Colorado
Department of Computer Science
Linear algebra constitutes the most time-consuming part of simulations
in many fields of science and engineering. Reducing the costs of
those calculations can have a significant impact on overall routine
performance, but such optimization is difficult. At each step of
the process, the code developer is confronted with many possibilities.
Choosing between them generally requires expertise in numerical
computation, mathematical software, compilers, and computer
architecture, yet few scientists have such broad expertise. This
talk will cover two interrelated collaborative projects focused on
easing the production of high-performance matrix algebra software.
I will first present work in progress on a taxonomy of software
that can be used to build highly-optimized matrix algebra software.
The taxonomy will provide an organized anthology of software
components and programming tools needed for that task. It will serve
as a guide to practitioners seeking to learn what is available for
their programming tasks, how to use it, and how the various parts
fit together. It will build upon and improve existing collections
of numerical software, adding tools for the tuning of matrix algebra
computations. Our objective is to build a taxonomy that will provide
all of the software needed to take a matrix algebra problem from
algorithm description to a high-performance implementation.
I will then introduce one of the tuning tools to be included in the
taxonomy, the Build to Order (BTO) compiler which automates loop
fusion in matrix algebra kernels. This optimization serves to reduce
the amount of data moved between memory and the processor. In
particular, I will describe BTO's analytic memory model which
accelerates the compiler by substantially reducing the number of
loop fusion options processed by it. The initial draft of the model
took into account traffic through the caches and TLB. I will discuss
an example that motivated us to improve the accuracy of the model
by adding register allocation.
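As a small, hedged illustration of the loop-fusion optimization that BTO automates (BTO operates on its own kernel specification language, not Python), computing y = A x and z = A^T w in separate loops reads the matrix A twice, while the fused loop reads each element only once.

# Sketch of loop fusion for a matrix-algebra kernel. Computing y = A x and
# z = A^T w separately traverses A twice; the fused loop loads each A[i][j]
# once, roughly halving the memory traffic for A. Illustration only.

def unfused(A, x, w):
    n, m = len(A), len(A[0])
    y = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]  # pass 1 over A
    z = [sum(A[i][j] * w[i] for i in range(n)) for j in range(m)]  # pass 2 over A
    return y, z

def fused(A, x, w):
    n, m = len(A), len(A[0])
    y = [0.0] * n
    z = [0.0] * m
    for i in range(n):                 # single pass: each A[i][j] loaded once
        wi = w[i]
        for j in range(m):
            aij = A[i][j]
            y[i] += aij * x[j]
            z[j] += aij * wi
    return y, z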
Bio:
Elizabeth
Jessup's research concerns the development of efficient
algorithms and software for matrix algebra problems. This work began
with the development of innovative memory-efficient algorithms and,
more recently, has moved toward tools to aid in programming of matrix
algebra software. Dr. Jessup has recently been collaborating with experts in
compiler technology, focusing on compilers that create fast numerical
software. Their initial focus has been on making efficient use of the
memory hierarchy on a single processor but they are moving into
multicore and GPU implementations. She is also interested in usability
of scientific software. To that end, Dr. Jessup is working with collaborators
on a tool to automate the construction of numerical software. Given a
problem specification, the tool will find and tune appropriate
routines for its solution.
Dr. Jessup was co-developer of an
award-winning, NSF-funded undergraduate curriculum in high-performance
scientific computing and has continued to work on innovative
approaches to education in her field. She has also conducted research on
factors that influence women's interest in computer science.
Date: Tuesday, February 21, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Tom Hayes
University of New Mexico
Department of Computer Science
We have all had the experience of waiting in a line before getting our turn to
do something. I will talk about some simple algorithms involving lining up, and their sometimes surprising behavior.
Bio:
Tom Hayes
is an assistant professor at the University of New Mexico in the Department
of Computer Science. Broadly speaking, he is interested in Theoretical Computer
Science and Machine Learning. Some of his particular interests include: convergence rates for Markov chains, sampling algorithms for random combinatorial structures, and online decision-making algorithms.
Date: Thursday, February 16, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Patrick Bridges
University of New Mexico
Department of Computer Science
Modern systems are becoming increasingly difficult to fully leverage, especially but not exclusively at the system software level, with parallelism and reliability emerging as major challenges. Current programming techniques do not address these challenges well, relying either on complex synchronization that is hard to understand, debug, analyze, and optimize, or forcing almost complete separation between cores. In this talk, I will present a new approach to programming system software for modern machines that leverages replication and redundancy to extract performance from multi-core hardware. In addition, its use of replication as a key structuring element has the potential to provide a more reliable system that is robust in the face of failure. I will describe the approach overall; discuss its novel features, advantages, and challenges; present performance numbers from work applying this approach in the context of a network protocol stack implementation; and discuss potential directions for future work.
Bio:
Patrick Bridges
is an associate professor at the University of New Mexico in the Department
of Computer Science. He did his undergraduate work at Mississippi State
University and received his Ph.D. from the University of Arizona in December of
2002. His research interests broadly cover operating systems and networks,
particularly scaling, composition, and adaptation issues in large-scale
systems. He works with collaborators at Sandia, Los Alamos, and Lawrence
Berkeley National Laboratories, IBM Research, AT&T Research, and a variety of
universities.
Date: Thursday, February 9, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Michalis Faloutsos
University of California, Riverside
In this talk, we highlight two topics on security from our lab. First, we address the problem of Internet traffic classification (e.g., web, filesharing, or botnet?). We present a fundamentally different approach to classifying traffic that studies network-wide behavior by modeling the interactions of users as a graph. By contrast, most previous approaches use statistics such as packet sizes and inter-packet delays. We show how our approach gives rise to novel and powerful ways to: (a) visualize the traffic, (b) model the behavior of applications, and (c) detect abnormalities and attacks. Extending this approach, we develop ENTELECHEIA, a botnet-detection method. Tests with real data suggest that our graph-based approach is very promising.
Second, we present MyPageKeeper, a security Facebook app with 13K downloads, which we deployed to: (a) quantify the presence of malware on Facebook, and (b) protect end-users. We designed MyPageKeeper in a way that strikes a balance between accuracy and scalability. Our initial results are scary and interesting: (a) malware is widespread, with 49% of our users exposed to at least one malicious post from a friend, and (b) roughly 74% of all malicious posts contain links that point back to Facebook, and thus would evade any of the current web-based filtering approaches.
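As a rough sketch of the graph-based viewpoint (the flow records and statistics below are invented, and this is not the lab's actual classifier), flow logs can be turned into a "who talks to whom" graph per application, and even simple graph statistics begin to separate client-server behavior from peer-to-peer-like behavior.

# Sketch: build a behavior graph from flow records and inspect simple
# per-graph statistics. The flow tuples are made up for illustration.
import networkx as nx

flows = [
    ("10.0.0.1", "93.184.216.34", 80),    # web-like: many clients, one server
    ("10.0.0.2", "93.184.216.34", 80),
    ("10.0.0.3", "93.184.216.34", 80),
    ("10.0.0.5", "10.0.0.6", 6881),       # P2P-like: symmetric, meshy
    ("10.0.0.6", "10.0.0.5", 6881),
    ("10.0.0.6", "10.0.0.7", 6881),
]

def behavior_graph(flows, port):
    G = nx.DiGraph()
    G.add_edges_from((src, dst) for src, dst, p in flows if p == port)
    return G

for port in (80, 6881):
    G = behavior_graph(flows, port)
    # Crude signal: star-like graphs (a few high in-degree hubs) suggest
    # client-server traffic; flatter, symmetric graphs suggest P2P.
    print(port, "max in-degree:", max(d for _, d in G.in_degree()),
          "nodes:", G.number_of_nodes())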
Bio:
Michalis Faloutsos
is a faculty member at the Computer Science Dept. at the University of
California, Riverside. He got his bachelor's degree at the National Technical
University of Athens and his M.Sc and Ph.D. at the University of Toronto. His
interests include, Internet protocols and measurements, peer-to-peer networks,
network security, BGP routing, and ad-hoc networks. With his two brothers, he
co-authored the paper on power-laws of the Internet topology, which received
the ACM SIGCOMM Test of Time award. His work has been supported by many NSF
and military grants, for a cumulative total of more than $6 million. Several
recent works have been widely cited in popular printed and electronic press
such as slashdot, ACM Electronic News, USA Today, and Wired. Most recently he
has focused on the classification of traffic and web security, and in 2008 he
co-founded a cyber-security company offering services at www.stopthehacker.com, which received two SBIR grants from the National Science Foundation and institutional funding in Dec 2011.
Date: Thursday, February 2, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Joseph R. Barr
Chief Scientist at ID Analytics
Part 1:
ID Analytics' main business is scoring applications (for credit/services) for risks including identity/
authenticity and credit. By definition an application is a vector of identity elements (SSN, Name,
Address, Phone, DOB, <more>), a vector known as "SNAPD", as well as additional fields. ID
Analytics processes the data, extracts pertinent features, and calculates a risk score on the fly. The
entire process has sub-second latency. At the basis of our analytics is the ID
Network, a virtual graph with SNAPD-vectors as nodes. One can envision making a connection between
two nodes if they share some identity element. The weight of the edge is the strength of the
connection. As one can imagine, various graph parameters are the predominant inputs to
our risk models. At the time of writing, the ID Network has 1.5 billion nodes (corresponding
to the number of transactions); this of course means that the graph is too large to be stored in
memory. Needless to say, how we do it is a trade secret, but I will indicate some principles
behind the ideas.
Part 2:
The risk ID Analytics scores falls under the more general rubric of consumer behavior.
We are interested in the spatial/temporal aspects of our network and how it relates to
macroeconomic and social data including demographics, geography, housing, census, interest
rates, unemployment, the federal deficit, the foreign balance of trade, and so on. Under certain
conditions, we will make our data available to an outside organization to participate in publishable
research.
The talk will also introduce id: a labs, a research-oriented organization which promotes collaborations with
academia and other research institutions.
Bio:
Joseph R. Barr
is the Chief Scientist at ID Analytics (www.idanalytics.com). After a few years in
academia (as a Math/CS Assistant Professor at California Lutheran University), he has spent the
past 17 years in industry as a risk and consumer behavior (analytics) professional. He was awarded
a Ph.D. in mathematics from the University of New Mexico on his work on graph colorings,
under the direction of Professor Roger C. Entringer. His current interests include the application
of statistics, machine-learning and combinatorial algorithms to risk management and consumer
behavior. Joe is married, has two young children, a boy and a girl, and an older son, a software
engineer at Intel.
Date: Thursday, January 26, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Kathryn Mohror
Lawrence Livermore National Lab
Applications running on high-performance computing systems can
encounter mean times between failures on the order of hours or days.
Commonly, applications tolerate failures by periodically saving their
state to checkpoint files on reliable storage, typically a parallel
file system. Writing these checkpoints can be expensive at large
scale, taking tens of minutes to complete. To address this problem, we
developed the Scalable Checkpoint/Restart library (SCR). SCR is a
multi-level checkpointing library; it checkpoints to storage on the
compute nodes in addition to the parallel file system. Through
experiments and modeling, we show that multi-level checkpointing
benefits existing systems, and we find that the benefits increase on
larger systems. In particular, we developed low-cost checkpoint
schemes that are 100x-1000x faster than the parallel file system and
effective against 85% of our system failures. Our approach improves
machine efficiency up to 35%, while reducing the load on the parallel
file system by a factor of two.
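As a back-of-the-envelope illustration of why multi-level checkpointing helps (the costs and flush ratio below are invented, and this is not the SCR API), writing most checkpoints to fast node-local storage and only every k-th one to the parallel file system cuts both the total time spent checkpointing and the load on the file system.

# Toy cost model of multi-level checkpointing: frequent cheap checkpoints to
# node-local storage, with only 1 in k also flushed to the parallel file
# system. All numbers are invented solely to show the shape of the tradeoff.

LOCAL_COST = 2.0     # seconds per node-local checkpoint (made up)
PFS_COST = 120.0     # seconds per parallel-file-system checkpoint (made up)

def checkpoint_overhead(num_checkpoints, flush_every):
    """Total seconds spent checkpointing when 1 in `flush_every` checkpoints
    also goes to the parallel file system."""
    flushes = num_checkpoints // flush_every
    return num_checkpoints * LOCAL_COST + flushes * PFS_COST

single_level = 100 * PFS_COST                      # every checkpoint to the PFS
multi_level = checkpoint_overhead(100, flush_every=10)
print(f"single-level: {single_level:.0f}s, multi-level: {multi_level:.0f}s")
# 12000s vs 1400s here: PFS load drops roughly by the flush ratio.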
Bio:
Kathryn Mohror
is a Postdoctoral Research Staff Member at the Center
for Applied Scientific Computing (CASC) at Lawrence Livermore National
Laboratory. Kathryn's research on high-end computing systems is
currently focused on scalable fault tolerant computing and performance
measurement and analysis. Her other research interests include
scalable automated performance analysis and tuning, parallel file
systems, and parallel programming paradigms.
Kathryn received her Ph.D. in Computer Science in 2010, an M.S. in
Computer Science in 2004, and a B.S. in Chemistry in 1999 from
Portland State University in Portland, OR.
Date: Thursday, January 19, 2012
Time: 11:00 am — 12:15 pm
Place: Mechanical Engineering 218
Terran Lane
UNM Department of Computer Science
Many modern scientific phenomena are best described in terms of graphs. From social networks to brain activity networks to genetic
networks to information networks, attention is increasingly shifting to data that describe or originate in graph structures. But because
of nonlinearities and statistical dependencies in graphical data, most "traditional" statistical methods are not well suited to such data.
Coupled with the explosion of raw data, stemming from revolutions in scientific measurement equipment, domain scientists are facing steep
challenges in statistical inference and data mining.
In this talk, I will describe work that my group has been doing on the identification of graph structure from indirect data. This problem is
very familiar to the machine learning community, where it is known to be both computationally and statistically challenging, but has
received substantially less attention in a number of scientific communities, where it is of substantial practical interest. I will
examine an approach to graph structure inference that roots into the topology of graph structure space. By imposing metric structure on
this otherwise unstructured set, we can develop fast, efficient, accurate inference mechanisms. I will explain our approach and
illustrate the core idea and variants with examples drawn from neuroscience and genomics and introduce recent results on malware
identification.
Bio:
Terran Lane
is an associate professor of computer science at UNM. His personal research
interests include behavioral modeling and learning to act/behave (reinforcement learning), scalability, representation, and the tradeoff between
stochastic and deterministic modeling. All of these represent different facets
of his overall interest in scaling learning methods to large, complex spaces and
using them to learn to perform lengthy, complicated tasks and to generalize over
behaviors. While he attempts to understand the core learning issues involved, he
often situates his work in domain studies of practical problems. Doing so both elucidates important issues and problems for the learning community and provides useful techniques to other disciplines.
Colloquia Archives