Award Recipients

Previous Years

Amogh Dhamdhere, University of California, San Diego
Volume-based Transit Pricing: Which Percentile is Right?

Internet Service Providers (ISPs) typically use volume-based pricing to charge their transit customers. Over the years, the industry standard for volume-based pricing has been the 95th-percentile pricing method: the 95th percentile of traffic volume across 5-minute samples is used to calculate the total transit payments billed to the customer. This method provides an estimate of the maximum traffic rate that the customer could send -- useful for capacity planning in the provider's network -- while also allowing the customer to send transient bursts of traffic in a few 5-minute intervals.
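The billing computation itself is simple to state. Below is a minimal sketch of the 95th-percentile method as described above (the function name and rounding convention are our own; providers differ in the details, e.g. in how inbound and outbound traffic are combined):

```python
import math

def billable_rate(samples_mbps, percentile=95):
    """Billable rate for one billing period under percentile pricing.

    samples_mbps holds one traffic-rate sample per 5-minute interval.
    This is a minimal sketch of the standard method; real billing
    systems differ in rounding and sample aggregation.
    """
    ordered = sorted(samples_mbps)
    # Discard the top (100 - percentile)% of intervals and bill at the
    # highest remaining sample.
    k = math.ceil(len(ordered) * percentile / 100) - 1
    return ordered[k]

# A period of mostly quiet 5-minute intervals with a few large bursts:
# the bursts fall in the discarded top 5% and are not billed.
samples = [10] * 95 + [500] * 5
print(billable_rate(samples))  # 10
```

Note how adding one more bursty interval beyond the allowed 5% would flip the bill to the burst rate, which is exactly the sensitivity to traffic composition discussed below.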

But the method is, by design, sensitive to changes in traffic composition and characteristics, and we have seen dramatic changes in both over the last decade. The evolution of such a diverse and dynamic ecosystem suggests the need for an empirical and computational exploration of the effectiveness of the 95th percentile charging approach on provider cost recovery, and more specifically how varying the charging percentile across peers would influence this effectiveness.

We propose two related research tasks. We will first perform a historical study of traffic patterns between neighboring networks, to understand the relation between the 95th percentile and other measures of the traffic distribution (e.g., median, mean, or peak), and how trends in these parameters influence the economics of 95th percentile pricing. Second, we propose to study the use of the same 95th percentile charging on a per-customer basis. We hypothesize that the appropriate charging percentile depends on the volume distribution of the traffic sent by a particular customer network, which in turn depends on the business type of that network, its size, and its user base. We propose to develop an algorithm by which a provider can select the charging percentile on a per-customer basis to optimize its cost recovery. A provider could use this optimization framework to selectively offer discounts, i.e., a lower percentile than the 95th, either to attract new customers or to retain existing ones.

The provider could also use this framework to price its service in a way that more accurately reflects the costs it incurs. The same computational framework is applicable to the study of emerging volume-based pricing schemes for residential customers, as well as to explore the debate over network neutrality and broader questions of appropriate business models for Internet data transport.

Dawn Song, University of California, Berkeley
Automatic Protocol State-Machine Inference for Protocol-level Network Anomaly Detection

Next-generation network anomaly detection systems aim to detect novel attacks and anomalies with high accuracy, armed with deep stateful knowledge of protocols in the form of protocol specifications. However, protocol specifications are currently mainly crafted by hand, which is both expensive and error-prone.

We propose a new technique to automatically infer protocol specifications given one or more program binaries implementing the protocol.

Our technique is a synergistic combination of symbolic reasoning on program binaries and active learning techniques. In contrast to existing techniques, our proposed technique can automatically infer complete protocol specifications with no missing transitions, enabling anomaly detection with high accuracy.

David Wetherall, University of Washington
Web Privacy Using IPv6 and the Cloud

We propose to target Web usage and develop a privacy architecture that spans both network and application layers, as is needed for privacy in practice. Our key idea is to leverage the large IPv6 address space to separate user activities all the way from different profiles within the browser to different IP addresses within the network, and to leverage the cloud for low-latency IP address mixing. This will allow users to create separate browsing activities on a single computer that can't be linked by third parties. We believe that these ideas can greatly raise everyday privacy levels while keeping forwarding efficient.
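As an illustration of the core idea (this is our own sketch, not the authors' design), a host holding a routed IPv6 /64 could derive a distinct, stable address per browser profile by hashing the profile name with a local secret into the 64-bit interface identifier, so that activities in different profiles leave the host from different addresses:

```python
import hashlib
import ipaddress

def profile_address(prefix, profile, secret):
    """Hypothetical scheme for illustration only: map each browser
    profile to its own stable IPv6 address within a delegated /64 by
    hashing the profile name with a per-host secret."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "expects a routed /64"
    digest = hashlib.sha256(secret + profile.encode()).digest()
    iid = int.from_bytes(digest[:8], 'big')  # 64-bit interface identifier
    return ipaddress.IPv6Address(int(net.network_address) | iid)

work = profile_address('2001:db8:1:2::/64', 'work', b'host-secret')
shopping = profile_address('2001:db8:1:2::/64', 'shopping', b'host-secret')
# Distinct profiles map to distinct addresses in the same /64.
print(work != shopping)  # True
```

Low-latency mixing in the cloud, as proposed above, would then be needed so that the shared /64 prefix itself does not re-link the profiles.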

Ingolf Krueger, UC San Diego
Application Aware Security in the Data Center

The complexity of today's distributed systems, as well as their dynamic nature, has rendered securing distributed applications increasingly difficult. We believe that overcoming these difficulties requires a global view of the distributed system. Such a global view enables detecting anomalies in the orchestrated behavior of a distributed application from the network. Currently, network security devices such as firewalls and intrusion detection systems target network security by observing only well-defined network protocol signatures, or by performing coarse-grained anomaly detection based on general traffic characteristics. These systems provide minimal application-level security, because they are not aware of the logic of the custom applications deployed over the network. We propose enhancing the network to offer more application-aware services. In particular, we propose to investigate opportunities for enabling application-aware network security inside the data center to detect attacks and anomalies. As a result of this research, we expect to be able to deploy monitors in the network that provide a distributed application-aware security service. This will enable system and network administrators to detect and isolate security breaches and anomalies in a timely manner.

Eby G. Friedman, University of Rochester
3-D Test Circuit on Power Delivery

The effective operation of 3-D circuits strongly depends upon the quality of the power delivered to the system, specifically power generation and distribution. There are two primary challenges to the development of an effective power generation and distribution system. The first challenge is to generate and distribute multiple different supply voltages, since diverse circuits and technologies require different voltage levels. The number and variety of these voltage levels within a heterogeneous system can be significant in modern multi-voltage systems. The second challenge is to guarantee signal integrity among these circuit blocks. We are proposing a test circuit composed of a series of test structures to examine several aspects of both power generation and power delivery in 3-D integrated systems. This test circuit will investigate and quantify fabrication, thermal, memory-system, signal integrity, and power integrity issues in 3-D integrated circuits. The test structures will provide insight into 3-D IC design, manufacturing, and test. Based on results from the test circuit, models for signal and power integrity, thermal effects, and reliability will be developed and calibrated. The test circuit will be composed of six types of test structures: the first examines power generation in 3-D structures; the second investigates power distribution in 3-D structures; the third focuses on interface circuits for interplane power delivery; the fourth examines memory stacking; the fifth examines the effect of thermal cycling on through-silicon vias (TSVs); and the final test structure will characterize the electrical properties of TSVs.

Krishnendu Chakrabarty, Duke University
Advanced Testing and Design-for-Testability Solutions for 3D Stacked Integrated Circuits

Since testing remains a major obstacle that hinders the adoption of 3D integration, test challenges for 3D SICs must be addressed before high-volume production of 3D SICs can be practical. Breakthroughs in test technology will allow higher levels of silicon integration, fewer defect escapes, and commercial exploitation.

This three-year research project is among the first to study 3D SIC testing in a systematic and holistic manner. Existing test solutions will be leveraged and extended wherever possible, and new solutions will be explored for test challenges that are unique to 3D SICs. A consequence of 3D integration is a natural transition from two "test insertions" (wafer sort and package test) to three or more, where the additional test insertions correspond to testing of the stack after assembly: the stack may need to be tested multiple times depending on the number of layers, test cost, and the assembly process. We call this the known good stack (KGS) test. Therefore, there is an even greater emphasis on the reuse of DFT infrastructure to justify the added cost and to ensure adequate ROI.

The proposed research includes test content development, test cost modeling, and built-in self-test (BIST).

Timothy G. Griffin, University of Cambridge
IGPs with expressive policy

Link-state routing protocols are often desirable because of their relatively fast convergence. This results from each router using an efficient path computation, such as Dijkstra's algorithm, rather than the slower "path exploration" method associated with distributed Bellman-Ford algorithms. On the other hand, protocols in the Bellman-Ford family -- such as the Border Gateway Protocol (BGP) -- seem to allow more expressive routing policies. Recent results (Sobrinho and Griffin) show that Dijkstra's algorithm can in fact be used to compute solutions in many policy-rich settings. However, such solutions are associated with locally optimal paths rather than globally optimal paths, and in some cases tunneling is required to avoid forwarding loops. These results could provide the foundation for a new class of intra-domain link-state protocols for networks that require expressive routing policies. This research project intends to initiate an exploration of this space.
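To give a flavor of the mechanism involved (our own sketch, not the Sobrinho-Griffin construction): Dijkstra's algorithm ordinarily runs over the (min, +) shortest-path algebra, but the same greedy loop can run over other routing algebras, such as the (max, min) "widest path" algebra below. The policy-rich algebras studied in this project generalize further, which is where local rather than global optimality appears.

```python
import heapq

def widest_paths(graph, source):
    """Dijkstra's greedy loop run over the (max, min) widest-path
    algebra instead of (min, +): extend paths with min (bottleneck
    capacity) and prefer routes with max. graph[u] maps each
    neighbor v to the capacity of link u->v.
    """
    best = {source: float('inf')}     # best bottleneck capacity found so far
    heap = [(-best[source], source)]  # max-heap via negated widths
    visited = set()
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u in visited:
            continue
        visited.add(u)
        for v, cap in graph[u].items():
            w = min(width, cap)       # bottleneck capacity along the path
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return best

g = {'a': {'b': 10, 'c': 3}, 'b': {'c': 5}, 'c': {}}
print(widest_paths(g, 'a'))  # {'a': inf, 'b': 10, 'c': 5}
```

This particular algebra still yields globally optimal paths; the interesting cases in the proposed work are algebras where the analogous computation yields only locally optimal ones.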

Bharat Bhuva, Vanderbilt University
Silicon latch-up reliability for deep sub-micron technologies

Silicon latch-up reliability has steadily improved over time. However, most research in this area has been done under nominal electrical operating conditions. With the close proximity of different regions on an IC at advanced technology nodes, single-event latch-up vulnerability has increased by orders of magnitude. Characterizing this reliability issue is costly and inefficient using traditional means. This proposal is for the investigation of silicon latch-up reliability using heavy-ion sources. Results from this project will allow a better understanding of single-event latch-up and will facilitate the development of mitigation strategies.

Edward W. Knightly, William Marsh Rice University
Joint Traffic and Spectrum Management from MHz to GHz

UHF bands are the undisputed "beach front property of spectrum" due to their capability to provide kilometer-scale links without requiring line-of-sight communication and with moderate-sized antennas. On the other hand, GHz bands are the "autobahn of spectrum" enabling transmission rates of Gb/sec and beyond. Joint use of these diverse bands yields an unprecedented opportunity for performance and network capabilities. In this project, we will explore protocols and tools for accessing the right spectrum for the right traffic. We will deepen our understanding of UHF-band wireless networking through deployment and war driving and will perform an early trial for urban-scale measurements and protocol development that jointly utilizes unused UHF bands as well as legacy 900 MHz, 2.4 GHz, and 5 GHz bands.

Henning Schulzrinne, Columbia University
An Independent Study of Mobility and Security in a Fully Integrated Heterogeneous Network

Today, the central mobility management question is much more than just "mobile IP vs. SIP mobility". Unlike during the initial phases of mobile wireless data services in the 1990s, today a mobile device can be simultaneously connected through several heterogeneous wireless and wireline technologies. The industry at large, and in particular carriers, enterprises, and equipment manufacturers, are keenly interested in an independent study, carried out by a major university, to determine the essential elements of this new mobility paradigm, with a particular emphasis on security. We would like to perform a study comprising a survey of available tools, and then determine what protocol elements are still missing to facilitate delivery of these services across an evolving heterogeneous network. Expected outcomes include a prototype of the new paradigm, for demonstration and experimental measurements in a customized testbed, as well as publication of original results in respected academic journals. The participation of a scientist/technologist from the carrier environment is fundamental to gearing the technology findings toward direct usability for Cisco customers. The ultimate objective is to develop an infrastructure that provides the carrier customer with a seamless, secure mobile user experience over a virtualized network.

Bernd Tillack, Technical University of Berlin - Department of High-Frequency and Semiconductor System Technologies
Investigation of SiGe HBT Ageing and its Influence on the Performance of RF Circuits

In this project the ageing of SiGe hetero bipolar transistors (HBTs) will be studied under so-called mixed-mode stress conditions (high-current and high-voltage stress beyond the emitter-collector breakdown voltage). The observed HBT degradation by hot-carrier injection will then be used to develop an "ageing function". This function models the increase of the base current and will in turn be used to predict long-term transistor degradation. Correspondingly, stress-adapted HBT compact models will be incorporated into appropriate radio-frequency circuits to simulate the influence of HBT ageing on circuit performance.

The scope and content of the work are appropriate for the master's thesis of a graduate student in electrical engineering.

Haitao Zheng, UC Santa Barbara
3D Wireless Beamforming for Data Center Networks

Contrary to prior assumptions, recent measurements show that data center traffic is not constrained by network bisection bandwidth, but is instead prone to congestion loss caused by short traffic bursts. Compared to the cost and complexity of modifying data center architectures, a much more attractive option is to augment wired links with flexible wireless links in the 60 GHz band. Current 60 GHz proposals, however, are constrained by link blockage and radio interference, which severely limit wireless transmissions in dense data centers.

In this project, we propose to investigate a new wireless primitive for data centers, 3D beamforming, where a top-of-rack directional antenna forms a wireless link by reflecting a focused beam off the ceiling towards the receiver. This reduces the link's interference footprint, avoids blocking obstacles, and provides an indirect line-of-sight path for reliable communication. Under near-ideal configurations, 3D beamforming extends the effective range of each link, allowing our system to connect any two racks using a single hop, and mitigating the need for multihop links. While still in an exploratory stage, 3D beamforming links could potentially provide a highly flexible routing layer that not only reduces unpredictable, sporadic congestion, but also provides an orthogonal control layer for software-defined networking.
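The geometry behind the ceiling bounce is straightforward (a sketch under the usual mirror-image assumption for specular reflection; the variable names and example dimensions are ours, not figures from the proposal):

```python
import math

def bounce_link(rack_distance_m, antenna_to_ceiling_m):
    """Path length and elevation angle for a link reflected off the
    ceiling midpoint between two top-of-rack antennas. Reflecting the
    receiver across the ceiling plane straightens the bent path into a
    line of length sqrt(d^2 + (2h)^2), aimed atan(2h/d) above horizontal.
    """
    d, h = rack_distance_m, antenna_to_ceiling_m
    path = math.hypot(d, 2 * h)
    elevation_deg = math.degrees(math.atan2(2 * h, d))
    return path, elevation_deg

# Two racks 9 m apart, antennas 3 m below the ceiling (illustrative numbers):
path, angle = bounce_link(9, 3)
print(round(path, 1), round(angle, 1))
```

Both endpoints steer toward the same ceiling point, so neither needs line of sight to the other, which is how the bounce sidesteps blockage by intervening racks.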

Ioannis Tomkos, Research and Education Laboratory in Information Technologies
Impact of flexible DWDM networking on the design of the IP+DWDM network

As IP traffic demand displays both topological and temporal disparity, the need arises to address these effects cost-efficiently and in a scalable manner. Emerging technologies in optical communication systems are enabling dynamic bit-rate adaptation and addressing inherent limitations of rigid fixed-grid single-carrier WDM networks. In fixed-grid networks, a fixed amount of spectrum is assigned per connection without considering requirements in terms of capacity and transparent optical reach. This results in great inefficiencies. From a capacity perspective, it leads either to under-utilized connections or to the introduction of an extra layer (OTN) to achieve higher utilization. From a reach perspective, either an over-engineered short-reach connection is assigned or expensive optical-electrical-optical regenerators must be deployed to achieve longer reach. Thus, the need arises to introduce alternative technologies enabling the flexible assignment of spectrum.

To this end, "FlexSpectrum" architectures are proposed, which make it possible to trade off capacity and reach against the spectrum utilized per connection. These architectures utilize recent advances in optical orthogonal frequency division multiplexing and Nyquist-WDM, which have set the stage for envisioning fully flexible optical networks capable of dynamically adapting to the requirements of each connection. Hence, so-called "superchannels" are formed, consisting of a set of tightly spaced optical subcarriers. The modulation level, the symbol rate, and the ratio of FEC to payload can all be adjusted according to the required bit-rate and reach. However, setting these different degrees of freedom is not straightforward, as they are to a great degree intertwined. For example, a higher bit-rate can be achieved by moving to higher multi-level modulation formats (such as PM-QPSK), but this comes at the expense of reduced reach. A higher reach can be achieved by increasing the symbol rate, but this translates into greater spectrum requirements. By increasing the percentage of FEC, a greater reach can be achieved without increasing the utilized spectrum; this, however, comes at the expense of a reduced payload (i.e., reduced bit-rate). It is hence clear that multiple tradeoffs occur when trying to meet the requirements at the connection level.
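The first of these tradeoffs can be made concrete with back-of-the-envelope arithmetic (our illustration; the parameter values are typical textbook numbers, not figures from the proposal):

```python
def net_bit_rate(symbol_rate_gbaud, bits_per_symbol, code_rate, pol_mux=2):
    """Net bit-rate (Gb/s) of one subcarrier: symbol rate x bits per
    symbol x FEC code rate, doubled for polarization multiplexing.
    A simplified model; real systems also subtract framing and pilot
    overheads."""
    return symbol_rate_gbaud * bits_per_symbol * code_rate * pol_mux

# PM-QPSK (2 bits/symbol per polarization) at 32 Gbaud with ~7% FEC overhead:
qpsk = net_bit_rate(32, 2, 1 / 1.07)
# PM-16QAM doubles the bit-rate in the same spectrum, at the cost of reach:
qam16 = net_bit_rate(32, 4, 1 / 1.07)
print(round(qpsk, 1), round(qam16, 1))  # roughly 120 vs 240 Gb/s
```

Holding the symbol rate (and hence the occupied spectrum) fixed while varying the modulation order and code rate is exactly the kind of per-connection adaptation the superchannel framework enables.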

However, while the spectrum-saving benefits of FlexSpectrum architectures are evident at the level of a static connection, new constraints and challenges are introduced when examining a realistic network-wide scenario. Especially when the justification of deploying new technologies is directly related to economic benefits, more holistic techno-economic modeling and studies are key to this analysis. As the introduction of new capabilities imposes requirements on the network equipment, additional costs are introduced in the network. For example, a FlexSpectrum sliceable transceiver, which can be effectively sliced into multiple "virtual transceivers" each accommodating different connections, incurs more cost as its degree of flexibility increases. Additionally, more complicated and cost-intensive optical switches (ROADMs) may need to be deployed. Considering the complexity introduced in flexible networking scenarios, we plan to conduct extensive techno-economic analysis in different network settings in order to identify potential benefits and occurring trade-offs. Since some of the network equipment considered is not yet commercially available, the sensitivity of the results to variations in its cost will be examined. Additionally, the "break-even" point when comparing alternative scenarios will be identified.

Current research in flexible optical networking has focused on the optical layer, addressing mostly technological aspects of the physical layer and savings in terms of spectrum. In contrast, we consider both the optical and the IP layer. We plan to use an industry-leading tool (MATE) for proper modeling of IP layer behavior. We expect that the additional consideration of the IP layer will lead to highly relevant and interesting results, as a holistic and realistic view of the overall benefits of flexible optical networking will be examined. Basic assumptions will be challenged, as in "O. Gerstel, et al., Increased IP Layer Protection Bandwidth Due to Router Bypass, ECOC 2010", where the assumption that protection bandwidth in routers is independent of the amount of bypass is discredited. While a more meshed network of small pipes may appear beneficial from a purely optical perspective, it may incur extra costs once the IP layer is considered. To the best of our knowledge, no previous work jointly considers the interaction of the IP and the flexible optical layer, offering insight into the overall incurred costs, especially under failure events in the IP layer.

In the following, we highlight the main points of our proposal. As multiple degrees of freedom become available, the question arises as to what the actual flexibility needs are. Considering that transparent reach requirements are critical to the selection of the transmission technology characteristics, we aim to identify a relationship between the degree of flexibility and the considered physical network topology. We go one step further and examine the design of the IP topology itself. Different strategies to design IP topologies can be developed, leading to different degrees of connectivity between the routers. This translates into a different traffic demand for the optical layer, which in turn affects the transparent reach and capacity requirements. Hence, we aim to examine, through techno-economic analysis, the circumstances under which additional flexibility yields improvement in terms of performance (e.g., utilized spectrum, CAPEX) in a multi-layer setting.

Douglas Comer, Purdue University
Increasing Reliability Of High-End Routers Through Self-Checks By An Independent Observer

High reliability is a key requirement for tier-1 service providers. Because it handles traffic at the core of the Internet, a tier-1 network operates continuously and experiences steady traffic.

To handle high traffic loads, routers used in tier-1 networks employ multiple control processors distributed among multiple line cards in multiple chassis. Thus, the control plane within such a router can be viewed as a distributed system. To achieve high reliability, the control processes must employ techniques from parallel and distributed computing that permit them to coordinate the various control functions so the overall system operates as a single, unified entity. We propose to investigate a technique for increasing reliability in such systems: the use of an outside observer that can monitor the operation and verify that the outcomes are aligned with expectations.

Ramana Rao Kompella, Purdue University
Towards Better Modeling of Communication Activity Dynamics in Large-Scale Online Social Networks

Online social networks (OSNs) have witnessed tremendous growth and popularity over the recent years.  The huge success and increasing popularity of social networks makes it important to characterize and study their behavior in detail.

Recent work in analyzing online social network data has focused primarily on either static social network structure (e.g., a fixed network of friendship links) or evolving social networks (e.g., a network where friendship links are added over time).  However, popular OSNs sites provide mechanisms to form and maintain community over time by facilitating communication (e.g., Wall postings, e-mail), content sharing (e.g., photographs, videos, links), and other forms of activities. Analyzing social activity networks formed by observing who engages with whom in a particular activity is an important area of research that has received very little attention to date.

The goal of this project is to develop methods to support the analysis and modeling of activity networks. We propose to address several weaknesses of previous related work with respect to this goal: (1) Although researchers are beginning to focus on activity networks, most studies have focused primarily on friendship networks, and as such activity networks remain comparatively understudied. (2) Sampling approaches have either focused on static graphs or on graphs that are fully observable, neither of which is appropriate for the large-scale streaming data common in activity networks. (3) Evaluation of sampling algorithms has focused on the accuracy of graph metric estimation; there has been little work on developing a theoretical framework for the task and on understanding the theoretical tradeoffs between preserving various properties of the graph. (4) Research on network evolution models has focused on the underlying network structure (i.e., friendships) rather than on the activity traffic that is overlaid on the existing network structure. In this project, we propose to address the aforementioned issues by developing a suite of algorithmic and analytic methods to support the characterization and modeling of activity networks. In particular, our key contributions are: (1) We will conduct extensive static and temporal characterization studies of social network activity, comparing the observed properties to those of the underlying friendship network. (2) We will study sampling techniques that can preserve graph properties for different communication activity graphs. (3) We will investigate the fundamental theoretical trade-offs between preserving different properties of the graph, and investigate different types of sampling errors to adjust or correct for bias. (4) We will develop procedural modeling techniques to generate social network activity graphs conditioned on the underlying network structure and profile information, to better represent the temporal dynamics and burstiness of activity patterns.
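As one concrete baseline relevant to point (2) above (our own sketch, not a method claimed by the authors), classic reservoir sampling maintains a uniform sample of interaction edges from an activity stream that is never fully materialized:

```python
import random

def reservoir_edge_sample(edge_stream, k, seed=0):
    """Keep a uniform random sample of k edges from a stream of
    (user, user, timestamp) activity events, in one pass and O(k)
    memory -- a baseline for sampling streaming activity graphs.
    """
    rng = random.Random(seed)
    sample = []
    for i, edge in enumerate(edge_stream):
        if i < k:
            sample.append(edge)
        else:
            # Replace a reservoir slot with probability k / (i + 1),
            # which keeps every edge seen so far equally likely.
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = edge
    return sample

stream = ((u, u + 1, u) for u in range(10000))
print(len(reservoir_edge_sample(stream, 100)))  # 100
```

A uniform edge sample like this preserves some graph properties (e.g., degree-weighted node sampling) but badly distorts others, which is precisely the kind of tradeoff the proposed theoretical framework would characterize.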

Frank Mueller, NC State University
A Benchmark Suite to Assess Soft Routing Capabilities of Advanced Architectures

Advanced architectures provide novel opportunities to replace the costly ASICs currently used in routers. Such commodity architectures allow routing to be performed in software. The objective of this work is to define metrics and create a benchmark suite that automatically derives quantitative measurements, allowing different architectures to be compared for their suitability for soft routing. The goal of this project is to identify suitable metrics, develop test kernels, and package the resulting software so that measurements can be easily obtained on generic architectures. Our vision is that this benchmark suite be widely utilized not only by Cisco but also by hardware vendors to further the capabilities of their architectures by providing specific ISA support for soft routing. Such a development would create a competitive marketplace for architectural support of soft routing and could contribute significantly to cutting the costs of future routing development.

Tony O'Driscoll, Duke University
Unified Communications Interfaces for Immersive Enterprise Collaboration

This project seeks to make a fundamental contribution to the field of human-centered computing by exploring new interface approaches for managing advanced unified communications services within 3D virtual workspaces that support group-based learning and problem solving among cohorts in a university course setting.

We will integrate external web, media, and Cisco collaboration services with immersive 3D workspace technologies to enable a well-provisioned three-dimensional wiki-space with advanced unified communication/presence capabilities that will be tested in a live higher-education setting. We will explore how best to enable and optimize the virtual workspace user interface to leverage unified communication/presence in support of group problem solving.

We will also evaluate the efficacy of different 3D user interface configurations against existing 2D unified communications/presence tools. User interface configurations will be tested by teams of faculty and students within Duke's Cross Continent MBA (CCMBA) program at the Fuqua School of Business. We propose to deploy and evaluate a unified communications/presence-enabled 3D virtual workspace collaboration context by:

* Establishing a 3D virtual workspace-integrated unified communications/presence service so that all Cisco presence and chat services are extended to within virtual workspaces

* Developing a virtual workspace activity feed into Cisco's Quad product via a web services API so that information about defined actions within virtual workspaces is incorporated into the Quad activity stream

* Integrating video-based communication services into, and from, virtual workspace contexts to allow integration of multiple media channels into a single virtual context

* Establishing a means of data mining and reporting on user activities within virtual workspaces so that relevant tag clouds are associated with each user's avatar and to enable robust evaluation of the environment's use

* Field-testing various user interface configurations by delivering 3D collaboration contexts with integrated communication and presence services in support of team-based communication/collaboration within the Cross Continent MBA (CCMBA) program at Duke's Fuqua School of Business.

Nikil Jayant, Georgia Tech
Video Coding Technologies for Error-resiliency and Bit-rate Adaptation

This research proposal follows previous and on-going work at Georgia Tech in the area by Nikil Jayant and team. In particular, it draws upon (a) research on video streaming as performed under a Cisco RFP in 2008 and (b) research on three-screen television and a home gateway (2009-2010).

As in previous work, the proposed research includes an end-to-end view of bandwidth and visual quality. It includes the effect of compression artifacts, as introduced by video encoding and incomplete data reception, and measurement of received video quality in a subjectively meaningful way in response to the following two conditions:

  1. Adaptation to bit-rate variations as a result of changes in network bandwidth, and
  2. Incomplete video reception due to network impairments, such as burst errors.

The overall goal is to maximize perceived video quality under a total bandwidth (bit-rate) constraint in these two conditions.

John William Atwood, Concordia University
Automated Keying, Authentication and Adjacency Management for Routing Protocol Security

Routers are constantly exchanging routing information with their neighbors, which permits them to achieve their primary task of forwarding packets.  To help ensure the validity of the routing information that is exchanged, it is important to authenticate the sender of that information.  Although some routing protocol specifications exist that mandate a "security option" that defines how to protect the information to be exchanged, all these specifications consider only manual key management, i.e., there is no defined procedure for automatically managing the keys used for these exchanges, or for ensuring the authenticity of the discovered neighbors.  Automatic procedures are essential if the provided security is to be effective over the long term, and easy to manage.

The problem of managing the keying and authentication for routing protocols can be cast as a group management problem with some unique requirements: potentially a very large number of small groups, robustness across re-boots, and effectiveness when unicast routing is not yet available.

We propose to design, implement, and test a distributed, automated key management and adjacency management system, for two example protocols: OSPF (a widely-deployed unicast routing protocol) and PIM-SM (the only multicast routing protocol with significant deployment).  We will use both hardware routers (CISCO) and software routers (XORP) in our implementations.  The resulting system will provide a proof-of-concept for future automated systems.

In addition, the exchanges among the components of the distributed management system will be formally validated using AVISPA, to ensure that the necessary security properties are maintained.

Chunming Qiao, SUNY Buffalo
Basic Research in Human Factors aware Cyber-Transportation Systems

A cyber transportation system (CTS), which applies the latest technologies in cyber and vehicular systems and transportation engineering, can be critically useful to our society, as it can reduce accidents, congestion, pollution, and energy consumption. As the effectiveness and efficiency of CTS will largely depend on how human drivers benefit from and respond to such a system, this project proposes multi-disciplinary research which, for the first time, takes human factors (HF) into account when designing and evaluating new CTS tools and applications to improve both traffic safety and traffic operations. The proposed research will focus on novel HF-aware algorithms and protocols for prioritization, routing/scheduling, and fusion of various CTS messages, so as to reduce drivers' response time and workload, prevent duplicate and conflicting warnings, and ultimately improve safety and comfort. An integrated traffic and network simulator capable of evaluating these and other CTS algorithms and applications will also be developed and used to complement other ongoing experimental activities at Cisco and elsewhere.

Li Chen, University of Saskatchewan
Investigate the root cause of Intersil DC-DC converter failure using pulsed laser

One of Cisco's end customers experienced an unusually high field failure rate with network linecards in a wafer fab. The root cause was traced to a DC-DC converter supplied by Intersil Inc. To further localize the root cause within the converter, pulsed-laser experiments are proposed to continue the investigation. The objective of the project is to use the pulsed-laser facility at the University of Saskatchewan to scan the DC-DC converter and identify the devices most sensitive to laser energy. These most sensitive devices are the likely root cause of the failures observed in the field tests. We aim to publish the research results in related conferences and journals.

Andrew Lippman, MIT Media Laboratory
Viral Transaction

We propose a focused, one-year research effort to address the architectures and applications of "Proximal networking."  We explore this in the context of transactions, which we define as instances of doing business or engaging in a negotiation of some kind in a place, with a purpose, and with people or institutions that are nearby.  Unlike pure "cyberspace" explorations, this is a realization of a means for advertising, enabling, and adding programmability to the transactions that we engage in during normal life, in real places, with real people.  Also, unlike social networks that assume an a priori connection between people realized through communications, our networks are space-dependent, time-varying, and opportunistic. 

Keith E. Williams, Rutgers, The State University of New Jersey
Infrastructure Improvements for Extracellular Matrix Research: A Computational Design

The extracellular matrix (ECM) is a complex network of collagens, laminins, fibronectins, and proteoglycans that provides a surface upon which cells can adhere, differentiate, and proliferate. Defects in the ECM are the underlying cause of a wide spectrum of diseases.  We are constructing artificial, de novo, collagen-based matrices using a hierarchic computational approach, employing protCAD (protein Computer Automated Design) - a software platform developed in our laboratory specifically for computational protein design.  This will provide a powerful system for studying molecular aspects of the matrix biology of cancer.  Successfully designed matrices will be applied to engineering safer artificial tissues.  High-end computer networking is essential to the research proposal, as the migration of large data sets requires robust network infrastructure and hardware components for various laboratory, office, and equipment locations throughout our facility.  To that end, we propose to replace our core end-of-life Catalyst 3548 100 megabits per second (Mbps) Layer 2 switches with hardware capable of 1 and 10 gigabit per second (Gbps) client-end throughput, thus increasing data clustering ten- and one hundred-fold, respectively.

Kishor Trivedi, Duke University
Availability, Performance and Cost Analysis of Cloud Systems

The goal of this project is to develop a general analytic modeling approach for the joint analysis of availability, performance, energy usage and cost applicable to various types of cloud services (Software as a Service, Platform as a Service, and Infrastructure as a Service) and cloud deployment models (private, public and hybrid clouds). Multi-level interacting stochastic process models that are capable of capturing all the relevant details of the workload, faultload and system hardware/software/management aspects will be developed to gain fidelity and yet retain tractability. Such scalable and accurate models will be used for bottleneck detection, capacity planning, SLA management, energy consumption handling, and overall optimization of cloud services/systems and of underlying virtualized data centers.
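As a flavor of the stochastic sub-models involved, consider the smallest possible availability model: one component alternating between UP and DOWN as a two-state continuous-time Markov chain. This is an illustrative sketch only, with invented rates; the proposed multi-level models compose many such interacting pieces:

```python
import numpy as np

# Minimal availability sub-model: a component alternates between UP and DOWN
# with failure rate lam and repair rate mu (illustrative rates, not from the
# proposal): MTTF = 1000 h, MTTR = 2 h.
lam, mu = 1 / 1000.0, 1 / 2.0

Q = np.array([[-lam,  lam],        # CTMC generator matrix, states [UP, DOWN]
              [  mu,  -mu]])

# Steady state: solve pi @ Q = 0 subject to pi summing to 1, by stacking the
# normalization constraint onto the balance equations.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = pi[0]               # closed form for this chain: mu / (lam + mu)
```

Larger availability/performance models follow the same pattern with more states and transitions; tractability then comes from decomposing into interacting sub-models, as the proposal describes.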

Fethi Belkhouche, University Enterprises, Inc. (on behalf of California State University, Sacramento)
Predictive Maintenance in Power Systems

This project seeks to improve fault tolerance and reliability in smart grid power systems. The approach is based on predictive maintenance, anticipating possible failures before they occur. The project explores new solutions from control theory and artificial intelligence, such as Bayesian prediction and machine learning. Algorithms are integrated with embedded smart sensors, and distributed computation with embedded processors and communication devices is used to predict faults and make online corrections when necessary.

Farshid Delgosha, New York Institute of Technology
Alternate Public Key Cryptosystems

We propose a novel algebraic approach for the design of multivariate public-key cryptosystems using paraunitary matrices over finite fields. These matrices are a special class of invertible matrices with polynomial entries. Our motivations for using such matrices are the existence of fully-parameterized building blocks that generate these matrices and the fact that the inverse of any paraunitary matrix is obtained with no computational cost. Using paraunitary matrices, we design a public-key cryptosystem and a digital signature scheme. The public information in the proposed schemes consists of a number of multivariate polynomials over a finite field. The size of the underlying finite field must be small to ensure the efficiency of arithmetic over the field and consequently of cryptographic operations. The link between the security of these schemes and the difficulty of a computational problem is established using mathematical evidence. The underlying computational problem, which is considered to be hard, is solving systems of multivariate polynomial equations over finite fields. We have studied instances of the proposed techniques over the finite field GF(256). Our theoretical analyses have shown that the proposed instances are extremely efficient in terms of encryption, decryption, signature generation, and signature verification.

Robin Sommer, International Computer Science Institute
A Concurrency Model for Deep Stateful Network Security Monitoring

Historically, parallelization of intrusion detection/prevention analysis has been confined to fast string-matching and coarse-grained load-balancing, with little fine-grained communication between the analysis units. These approaches buy some initial speed-ups, but for more sophisticated forms of analysis, Amdahl's Law limits the performance gains we can readily attain. We propose to design and implement a more flexible concurrency model for network security monitoring that instead enables automatic and scalable parallelization of complex, highly-stateful traffic analysis tasks. The starting point for our work will be a parallelization approach exploiting the observation that such tasks tend to rely on a modest set of programmatic idioms that we can parallelize well with respect to the analysis units they are expressed in. Leveraging this observation, we will build a scheduler that splits the overall analysis into small independent segments and then identifies temporal and spatial relationships among these segments to schedule them across a set of available threads while minimizing inter-thread communication. We will implement such a scheduler as part of a novel high-level abstract machine model that we are currently developing as an execution environment specifically tailored to the domain of high-performance network traffic analysis.  This model directly supports the field's common abstractions and idioms, and the integrated scheduler will thus be able to exploit the captured domain knowledge for parallelizing analyses much more effectively than standard threading models can. We believe that the work we propose fits well with Cisco's mission to provide flexible high-performance network technology. If successful, our results will not be limited to the security domain but in addition will enable sophisticated, stateful network processing on modern multi-core platforms in general.

Nicholas Race, Lancaster University
Towards Socially Aware Content Centric Networking

Social networks are having an increasing role in the presentation and personalisation of content viewed across the Internet, with users exposed to a range of content items coupled with information from their social networks. In these scenarios there is currently a limited understanding of the emerging usage patterns and context of use and thus, the potential impact this may have on the delivery platform and underlying network infrastructure. In this project we propose to examine these emerging usage patterns in detail, focusing on the role of the human network and the implications this has for content centric networking.  In broad terms, we seek to develop advanced models of social interaction through long term measurements to develop new mechanisms to inform, create and sustain intelligent content delivery networks.  Initially, this work will focus on defining the relevant metrics that can be leveraged as 'triggers' for informing content networks.  Through knowledge extracted from social graphs and social interactions between users (i.e. the human network) we aim to develop models which can be used to determine or predict likely interest in content items with the intention of improving overall delivery performance.  This research proposition builds upon our previous and ongoing research in this area which has exploited user generated bookmarks to support pre-fetching within CDNs and examined the role of the social graph in informing media distribution systems.

Naofal Al-Dhahir, University of Texas at Dallas
Advanced Interference Mitigation Algorithms for Receivers with Limited Spatial Dimensions

Broadband transmission in shared bands is severely limited by narrow-band interference (NBI). Examples include NBI from Bluetooth, Zigbee, microwave ovens, and cordless phones onto wireless local area networks (WLAN), and AM or amateur radio frequency interference onto digital subscriber line (DSL) modems. Classical NBI mitigation techniques either assume knowledge of the NBI statistics, power level, or exact frequency support, which is typically not available in practice, or require multiple receive antennas, which increases implementation cost. We propose a novel signal processing approach, based on recent compressive-sensing theory, to estimate and mitigate NBI without making any assumptions on the NBI signal except its narrow-band nature, i.e., sparsity in the frequency domain. In our approach, a novel filter design projects the received signal vector, which is corrupted by noise and NBI, onto a subspace orthogonal to the information signal subspace. The resulting signal vector is related to the unknown (but sparse) NBI vector through an under-determined linear system of equations. Then, the sparsity of the NBI vector in the frequency domain is exploited to estimate it uniquely and efficiently by solving a convex optimization program. Our approach (1) is computationally efficient, (2) is guaranteed to converge to the globally optimum solution, (3) works with a single receive antenna while being compatible with multiple transmit and/or receive antennas, (4) can mitigate multiple NBI sources simultaneously, (5) is agnostic to the NBI type and signaling format, and (6) makes no prior assumptions on the NBI statistics, power level, or frequency support.
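The recovery step described here, estimating a sparse vector from an under-determined linear system via convex optimization, can be sketched with a basic iterative shrinkage-thresholding (ISTA) solver for the L1-regularized least-squares problem. All dimensions, values, and names below are invented for illustration and are not the proposal's filter design:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=2000):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding (a standard sparse-recovery workhorse)."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                       # gradient of the quadratic term
        z = x - g / L                               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 40, 3                                # ambient dim, measurements, sparsity
true = np.zeros(n)                                  # stand-in for the sparse NBI spectrum
true[rng.choice(n, k, replace=False)] = rng.uniform(2.0, 4.0, k)
A = rng.normal(size=(m, n)) / np.sqrt(m)            # under-determined projection (m < n)
y = A @ true                                        # what survives the orthogonal projection
est = ista(A, y)                                    # sparse estimate of the interference
```

With far fewer measurements than unknowns (40 vs. 128), the L1 penalty recovers the 3-sparse vector essentially exactly, which is the property the proposal relies on for NBI estimation.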

Subhash Suri, University of California, Santa Barbara
Fast Traffic Measurement at all Time Scales

Today's traffic measurement tools for fine-grain time scales are woefully inadequate; yet the importance of fine-grain measurements cannot be overstated. First, it is well-known that data traffic looks very different at different time scales: measurements at 1-second granularity can completely miss the severe burstiness of traffic observable at 10-microsecond intervals. Second, the negative synergy of Gigabit link speeds and small switch buffers has led to the so-called microburst phenomenon, which can cause packet drops resulting in large increases in latency. Finally, it is difficult to predict in advance which time scale is interesting: is it 10 usec, 100 usec, or 1 msec? Naively running parallel algorithms for each possible time scale, such as measuring the maximum bandwidth over any period of length T, is prohibitively expensive. Our goal in this proposed research is two-fold: to develop an efficient and flexible tool for fine-grained traffic measurements at all time scales, and to build an algorithmic foundation for a measurement-based understanding of fine-scale traffic behavior. The existing solutions take the extreme form of full traffic flow information at one end (e.g., NetFlow) and coarser-grain summarization techniques at the other end (e.g., packet counters, heavy-hitters, and flow counts). We hope that our tools will provide an intermediate solution, with a measurement accuracy comparable to NetFlow but at an overhead cost comparable to the coarser-grain methods. Our techniques have the flexibility to answer queries after the fact without the overhead of full traffic flow information. We will investigate software and hardware implementations.
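As a reference point for what such a tool must compute, here is a brute-force sketch of peak bandwidth over sliding windows at several time scales, with invented numbers. It shows how a microburst that saturates a 10 usec window vanishes at coarser granularity; a real implementation would maintain these maxima incrementally in the data path rather than scanning after the fact:

```python
import numpy as np

def max_rate_all_scales(bytes_per_bin, bin_usec, windows_usec):
    """Peak traffic rate (bytes/sec) over any aligned sliding window, for
    several window sizes. Brute-force reference via prefix sums; names and
    granularities are illustrative only."""
    prefix = np.concatenate(([0], np.cumsum(bytes_per_bin)))
    peaks = {}
    for w_usec in windows_usec:
        w = max(1, w_usec // bin_usec)       # window length in base bins
        sums = prefix[w:] - prefix[:-w]      # bytes inside every sliding window
        peaks[w_usec] = sums.max() * 1e6 / w_usec
    return peaks

# A 10 ms trace in 10 usec bins: idle except one 100 usec microburst of
# ten back-to-back 1500-byte packets.
bins = np.zeros(1000, dtype=int)
bins[500:510] = 1500
peaks = max_rate_all_scales(bins, 10, [10, 100, 1000, 10000])
```

At 10 usec granularity the peak is 1.5e8 B/s (1.2 Gbps), while the same trace measured over 10 msec averages out to 1.5e6 B/s, a hundred-fold difference, which is exactly why coarse measurements miss microbursts.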

Dmitri Krioukov, The Cooperative Association for Internet Data Analysis - UCSD
Scalable Interdomain Routing in Hyperbolic Spaces

Rapid growth of the Internet causes concerns among Internet experts about scalability and sustainability of existing Internet interdomain routing. Growing FIB sizes are alarming, as we are approaching fundamental physical limits of router hardware, e.g., related to heat dissipation. Worse yet, it has been proven that the communication overhead of any routing protocol cannot grow slower than polynomially with the size of Internet-like topologies. Our previous URP proposal "Convergence-Free Routing over Hidden Metric Spaces as a Form of Routing without Updates and Topology Awareness" (funded in 2007) laid the foundations for an interdomain routing solution with scalability characteristics which are either equal or close to the theoretically best possible. Here we propose to take the next step to determine the applicability of the obtained results to the Internet: develop a new addressing scheme based on the hyperbolic embedding of the Internet AS topology, and evaluate the performance of topology-oblivious routing over this addressing scheme. In this approach, each AS is assigned coordinates in an underlying hyperbolic space. To forward packets, interdomain routers will store the coordinates of neighbor ASes. Given the destination AS coordinate in the packet, they will perform greedy geometric forwarding: select as the next-hop AS the neighbor closest to the destination AS in the hyperbolic space. If the proposed strategy is successful, it will deliver an optimal routing solution for interdomain routing, i.e., a solution with the best possible scaling characteristics, thus resolving serious scaling limitations that the Internet faces today.
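The greedy geometric forwarding step reduces to a distance computation in the hyperbolic plane plus a minimum over neighbors. A minimal sketch with invented AS coordinates in polar form (r, theta), not the project's actual embedding:

```python
import math

def hyp_dist(a, b):
    """Distance between two points (r, theta) in the hyperbolic plane,
    via the hyperbolic law of cosines."""
    dtheta = math.pi - abs(math.pi - abs(a[1] - b[1]) % (2 * math.pi))
    x = (math.cosh(a[0]) * math.cosh(b[0])
         - math.sinh(a[0]) * math.sinh(b[0]) * math.cos(dtheta))
    return math.acosh(max(x, 1.0))     # clamp guards against rounding below 1

def greedy_next_hop(neighbors, dest):
    """Forward toward the neighbor AS closest to the destination in
    hyperbolic space; only neighbor coordinates need to be stored."""
    return min(neighbors, key=lambda n: hyp_dist(neighbors[n], dest))

# Invented coordinates: AS1 sits at roughly the destination's angle.
neighbors = {"AS1": (2.0, 0.1), "AS2": (2.0, 3.0)}
dest = (3.0, 0.0)
next_hop = greedy_next_hop(neighbors, dest)
```

The router keeps no global topology, only its neighbors' coordinates, which is the source of the scaling benefit the proposal targets.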

Sanjay Rao, Purdue University
Abstracting and simplifying routing designs of operational enterprise networks

Enterprise routing designs are often complex, reflecting the plethora of operator objectives that must be met, such as administrative policies and requirements on stability, isolation, and performance. The objectives are often achieved by partitioning the network into multiple routing domains, and using a variety of low-level configuration primitives such as static routes, route redistribution, and tunnels. Use of such low-level primitives leads to significant configuration complexity, and is consequently error-prone. The goal of this project is to characterize operational enterprise routing designs with a view to understanding common design patterns and usages. Metrics will be developed to characterize both the configuration complexity of routing designs and operator performance objectives. Such understanding and metrics will guide and inform the creation of alternate routing designs that meet operator correctness and performance objectives but with lower configuration complexity. Our efforts will be enabled through a tool that we are developing which can reverse engineer the routing design of an enterprise network from low-level router configuration files. The tool will capture aspects such as the number of routing instances in the network, the protocols and routers in each instance, and the logic that connects instances together.

Dawn Song, University of California, Berkeley 
Sprawl: Real-Time Spam URL Filtering as a Service

With the proliferation of user-generated content and web services such as Facebook, Twitter and Yelp, spammers and attackers have even more channels to expose users to malicious URL links, leading them to spam or malicious websites as they use the web services. Thus it is important to provide URL filtering as a real-time service to protect users from malicious URL links. In this project, we propose to design and develop  a real-time system to filter spam URLs from millions of URLs per day before they appear on web services and users' computers. To accomplish this, we propose to develop a novel web crawler and spam URL classifier that visits URLs submitted by web services and makes an immediate determination of whether the URL should be blocked. By limiting classification to features solely derived from crawling the URL, our system is independent of the context where a URL appears, allowing our system to generalize to any environment and work in conjunction with existing security protections.

Andreas Savakis, Rochester Institute of Technology
Video Event Recognition using Sparse Representations and Graphical Models

The objective of the proposed research is to design a system that can reliably identify semantic events in video with minimum supervision. We employ a novel framework inspired by the emerging field of Compressive Sensing and probabilistic graphical models. Compressive Sensing (CS), or Sparse Representations (SR), facilitates dealing with large amounts of high dimensional data by representing the signal of interest by a small number of samples. Our approach incorporates the construction of a dictionary containing semantically distinguishable features to enable generalization due to appearance variability in a computationally efficient manner. Using the learned dictionary, we form a histogram of features based on bags of visual words, where each bin corresponds to a specific primitive action dictionary element. When a new video sequence is presented to the system, we identify the corresponding dictionary elements by utilizing the Sparse Representations (SR) framework. Probabilistic graphical models, such as conditional random fields, are used to model correlations between primitive actions and identify complex events. Our approach has great potential for applications ranging from video search and similarity across videos to real time activity and event recognition for surveillance, human computer interaction, and smart environments.

Steven Low, Caltech
Architectures, Algorithms and Models for Smart Connected Societies

The power network will undergo in the next few decades the same architectural transformation that the telephone network has recently gone through to become more intelligent, more open, more diverse, more autonomous, and with much greater user participation. This architectural transformation will create an Internet equivalent of the power network, with equally far-reaching and lasting impacts.  Our goal is to develop some of the theories and algorithms that will help understand and guide this transformation, leveraging what we have learned from the Internet and adapting it to the unique characteristics of power networks.

Grenville Armitage, Swinburne University of Technology
Detecting and reducing BGP update noise

In previous work we implemented and evaluated an approach called Path Exploration (PE) Damping (PED) -- a router-level mechanism for reducing the propagation volume of likely-transient update messages within a BGP network while decreasing the average time to restore reachability compared to current BGP update damping practices. We are now interested in more complex BGP update sequences that are still essentially "noise" -- sequences that force update processing at one or more downstream peers, yet after the sequence ends there is no meaningful alteration to the BGP topology. We propose to extend our previous work in three ways: develop and evaluate techniques for auto-tuning of path exploration damping intervals, detect and suppress more complex "noisy" BGP update sequences, and improve our "Quagga Accelerator" toolkit.
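One minimal way to operationalize "noise" as defined above: replay a per-prefix update burst against a snapshot of the routing state and flag prefixes whose best path is unchanged once the burst quiesces. This toy model (one path per prefix, no policy, invented prefixes and AS-paths) is our illustration, not the PED mechanism itself:

```python
def noisy_prefixes(rib_before, update_log):
    """Return the prefixes whose entire update burst left the routing state
    unchanged -- downstream peers did update processing for no net effect.
    Each log entry replaces the current path; path=None models a withdrawal."""
    rib = dict(rib_before)
    touched = set()
    for prefix, path in update_log:
        touched.add(prefix)
        if path is None:
            rib.pop(prefix, None)
        else:
            rib[prefix] = path
    return {p for p in touched if rib.get(p) == rib_before.get(p)}

rib0 = {"10.0.0.0/8": (3356, 15169), "192.0.2.0/24": (701, 64500)}
burst = [
    ("10.0.0.0/8", (701, 3356, 15169)),   # path exploration begins...
    ("10.0.0.0/8", (701, 1239, 15169)),
    ("10.0.0.0/8", (3356, 15169)),        # ...and ends at the original path
    ("192.0.2.0/24", (3356, 64500)),      # a genuine, lasting path change
]
noise = noisy_prefixes(rib0, burst)
```

A damping mechanism that could recognize the first burst as noise would suppress three updates without affecting reachability, while the second prefix's change must still propagate.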

Hussein T. Mouftah, University of Ottawa
An Efficient Aggregate Multiparty Signature Scheme for S-BGP AS-PATH Validation

This research study will involve the evaluation of secure multiparty digital signature schemes. Our objective is to design/select a light and secure scheme that enables a receiver to verify the authenticity of an update signed sequentially by multiple ASes as well as the order in which they signed the update. We will evaluate the performance of the schemes in the context of SBGP based on the assumption that some major nodes in the network are considered trustworthy by the rest of the nodes in the network.

Rama Chellappa, University of Maryland
Methods for Video Summarization and Characterization

The tremendous increase in video content is driven by inexpensive video cameras and the growth of the Internet, including the popularity of several social networking and video-sharing websites. This explosive growth in video content has made the development of efficient indexing and retrieval algorithms for video data essential. Most existing techniques search for a specific video based on textual tags, which are often subjective and ambiguous and do not scale with the increasing size of the dataset. We propose to develop novel methods for two problems that will enable efficient video indexing and retrieval: (1) creating abstracts of videos, and (2) extracting meta-data for videos. Abstracts of videos are created using an optimization technique that ensures complete coverage of the video and guarantees that the retained summaries are diverse. The method for extracting meta-data relies on video classification using chaotic invariants.
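The coverage-plus-diversity objective described above can be illustrated with a simple greedy selection over per-frame feature vectors. This is a hedged sketch of the general idea, not the authors' optimization technique; the scoring rule and all parameters are invented:

```python
import numpy as np

def summarize(features, k, lam=0.5):
    """Greedily pick k key frames: each pick maximizes mean cosine similarity
    to all frames (coverage) minus lam times similarity to frames already
    picked (redundancy). features is an (n_frames, d) array."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T                       # cosine similarity matrix
    picked = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in picked:
                continue
            coverage = sim[i].mean()            # how well frame i represents the video
            redundancy = max((sim[i][j] for j in picked), default=0.0)
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        picked.append(best)
    return picked

# Toy video: frames 0-1 show one scene, frames 2-3 another.
features = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
picked = summarize(features, k=2)
```

With two visually distinct scenes, the redundancy penalty forces the two picks into different scenes, giving full coverage with a diverse summary.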

Paolo Dini, Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
Improving User Experienced Quality using Multiple Wireless Interfaces Simultaneously and Network Coding

The improvement in the per-user capacity of next-generation wireless access technologies continuously trails the ever-increasing bandwidth requirements of mobile IP services. For this reason, complementary solutions are being investigated by the research community to fulfill the same requirements using existing wireless technologies. In this respect, a promising solution is to use multiple access technologies simultaneously in order to obtain an aggregated bandwidth sufficient to provide a satisfactory Quality of Experience (QoE) to the user. State-of-the-art solutions for the simultaneous use of multiple access links suffer from the delay dispersion problem, due to the heavy-tail characteristic of delay in today's Internet. In order for this delay not to result in packet loss due to late arrival at the re-ordering buffer, a high re-ordering delay is needed. To summarize, existing solutions are effective in bonding multiple access technologies together to increase throughput, but in doing so they necessarily increase the end-to-end application-layer delay, which is unacceptable for many applications. The use of Network Coding (NC) allows an effective trade-off between throughput and delay, and copes with occasional packet losses. In fact, it adds redundancy packets: if some packets of the original flow arrive late at the re-ordering entity due to heavy-tailed delays, they can be recovered by combining the data and redundancy packets that arrived in time. In applying the NC paradigm to the scenario considered in this proposal, we have to address the following problems: (1) which coding strategy to choose for a given set of access-link characteristics (delay, capacity) and service requirements; (2) how to dynamically adapt the coding strategy to the varying characteristics of wireless links; and (3) how to implement the solution transparently to the application.
In addition, a specific goal of our research proposal is to test our NC-based solution experimentally in a proof-of-concept testbed using both WLAN and 3G access technologies. The EXTREME testbed, used in grant 2008-07109, will be extended to validate the proposed solution. In particular, the encoding and decoding operations will be performed by software modules installed in the mobile node and in the bonding server placed between the mobile and its corresponding node.
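The recovery idea, combining data and redundancy packets that arrived in time to rebuild a late one, can be sketched with the simplest possible code: a single XOR parity packet per generation. Practical network coding uses random linear codes over larger fields; this toy is illustrative only:

```python
from functools import reduce

def xor(a, b):
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(generation):
    """Append one XOR redundancy packet to a generation of equal-size data
    packets; any single late or lost packet can then be rebuilt."""
    return list(generation) + [reduce(xor, generation)]

def recover(arrived):
    """XOR of everything that arrived in time (n-1 data packets plus parity)
    reproduces the one missing data packet."""
    return reduce(xor, arrived)

# Three data packets split across bonded links; packet 1 is stuck behind a
# heavy-tailed delay on the slow link.
pkts = [b"alpha___", b"beta____", b"gamma___"]
coded = encode(pkts)
arrived = [c for i, c in enumerate(coded) if i != 1]
rebuilt = recover(arrived)
```

The re-ordering buffer can release the flow as soon as any n of the n+1 coded packets arrive, instead of waiting out the slowest link, which is exactly the throughput/delay trade-off the proposal exploits.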

Ronald (Shawn) Blanton, Carnegie Mellon University
Identifying Manufacturing Tests for Preventing Escapes to the Board Level

Our objective is to develop a methodology for analyzing system-level test data to derive improved manufacturing tests that prevent IC-level escapes to the system/board level of integration.

Mike Wirthlin, Brigham Young University
Timing Aware Partial TMR and Formal Analysis of TMR

This work will investigate methods for applying partial TMR within FPGA designs in a way that considers the timing of the FPGA design. Conventional methods for applying TMR negatively affect timing and may not meet timing constraints. New partial TMR selection and application techniques will be investigated and developed. In addition, this project will investigate the ability to verify partial TMR netlists by using commercially available formal verification techniques.

Yun Liu, Beijing Jiaotong University
Research on Consensus Formation on the Internet

The Internet has brought many new features to the process of consensus formation, such as the huge number of participants, the large volume of sample data, the fast speed of diffusion, and the acentric (decentralized) distribution. Web users exhibit various complex traits, such as extremism, obstinateness, and bounded rationality. These features cause many problems when traditional methods and models are applied to the study of consensus formation on the Internet. This research concentrates on how unique features of the Internet influence the process of consensus formation. It adopts methods from interdisciplinary fields, studying the diffusion of Internet information, the mapping between individual preference and opinion choice, the interaction of individual preferences, and the influence of extreme agents and hub agents on consensus formation on the Internet.
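The interplay of bounded rationality and extremism mentioned above is commonly studied with bounded-confidence opinion models. As an illustration only, not this project's model, here is a minimal Deffuant-style update in which agents compromise only when their opinions are already close:

```python
import random

def deffuant_step(opinions, eps=0.3, mu=0.5, rng=random):
    """One interaction of a bounded-confidence model: two random agents move
    toward each other only if their opinions differ by less than eps (a
    stand-in for bounded rationality / obstinateness). mu=0.5 means both
    meet at the midpoint. Parameters are illustrative."""
    i, j = rng.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < eps:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift
        opinions[j] -= shift
    return opinions

random.seed(1)
ops = [i / 99 for i in range(100)]      # 100 agents, opinions spread over [0, 1]
for _ in range(20000):
    deffuant_step(ops)
```

Repeated interactions drive the population into opinion clusters whose number depends on the confidence bound eps, one simple mechanism by which local interaction rules shape global consensus.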

Xiaohong Guan, Xi'an Jiaotong University
Abnormal Data Detection and Control Actuation Authentication for Smart Grid

Smart grid technology provides a desirable infrastructure for energy-efficient consumption and transmission by integrating traditional electrical power grids with an information network, and may incorporate a large number of renewable resources, new storage, and dynamic user demands. Since state measurements, pricing information, and control actions are transmitted via the information network, abnormal sensing data and false control commands -- arising from hacking, attacks, and fraudulent information commonly encountered on the Internet -- would pose a great threat to secure and stable power grid operation. A new method is proposed to detect abnormal sensing data in a smart grid based on information fusion of data consistency checks and detection of abnormal network traffic. A new dynamic authentication mechanism is proposed for control commands issued from the control center to an RTU, taking into account the structural characteristics of the power grid. A new model is proposed to assure run-time trustworthiness of RTUs and detect abnormal RTUs without relying on an embedded trusted platform module.

Qiang Xu, The Chinese University of Hong Kong
Physical Defect Test for NTF with Minimized Test Efforts – Phase II, Discover NTF Devices

No Trouble Found (NTF) devices are devices that fail in system-level test but pass retest on automatic test equipment (ATE) at the IC provider. To solve the NTF problem, we first need to determine which chips on a board are malfunctioning when the board fails system test. Unlike prior solutions that try to build a direct relationship between the test syndromes and the failed ICs (with unsatisfactory diagnosis accuracy), in this project we plan to add a "bridge" linking these two items to achieve a more fine-grained diagnosis solution, considering the functional behavior of each IC, the system structure, and the boards' repair history. The research outputs of this project will include not only publications in prestigious journals and conferences, but also a novel diagnosis tool that ranks potential NTF devices and suggests new tests for enhanced diagnosis accuracy.

Steven Swanson, University of California, San Diego
Characterizing Solid-State Storage Devices for Enterprise Applications

Flash-based storage is poised to revolutionize how computers store and access non-volatile data. Raw flash chips and solid-state drives (SSDs) offer the promise of orders-of-magnitude reductions in latency and commensurate gains in bandwidth. Many segments of the computing industry are counting on SSDs to enable a range of exciting applications, from intelligent mobile devices to high-end networking equipment to "cloud" storage systems to mission-critical high-performance storage applications. We propose to evaluate discrete flash devices and flash-based solid-state drives to understand their performance, efficiency, and reliability characteristics.  The data that these studies provide will place systems designers on a firm footing with respect to the behavior of the flash-based components they wish to include in products. As a result, designers will be able to employ these devices in situations that minimize their shortcomings and allow their strengths to shine through. The result will be systems that are faster and more efficient, but that can also meet the stringent reliability requirements of the most demanding applications.

Saleem Bhatti, University of St Andrews / Computer Science
IP Scalability Through Naming (IP-STN)

Growing concerns about the scalability of addressing and routing have prompted the research community to consider the long-discussed idea of the Identifier/Locator (I/L) split as a solution space. As documented by the IRTF Routing Research Group (RRG), some proposals for an I/L approach to networking are focussed towards "in-network" implementation models, such as LISP, while others are focussed on end-system implementation models, such as the Identifier Locator Network Protocol (ILNP). As the complexity of the current in-network engineering landscape grows, introducing further complexity within the network may not be scalable in the long term, and may destabilise existing, mature technology deployments (such as MPLS). This proposal is to build c-ILNPv6, a prototype implementation of the core functionality of ILNP based on IPv6. ILNPv6 has almost no impact on existing core network engineering, as it is an end-system implementation re-using the existing IPv6 packet format and address space. Use of ILNPv6 would allow multi-homing to be implemented without any additional routing table entries in the DFZ. While moving complexity away from the core, ILNPv6 also offers the opportunity for new developments in adjacent areas such as mobility.
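For concreteness, ILNP's reuse of the IPv6 address bits can be shown directly: the 128-bit address is reinterpreted as a 64-bit Locator (topological "where") plus a 64-bit Node Identifier (stable "who"). A small sketch; the helper name and example address are ours:

```python
import ipaddress

def ilnp_split(addr):
    """Split an IPv6 address the way ILNPv6 reinterprets it: high 64 bits as
    the Locator, low 64 bits as the Node Identifier. Multi-homing or mobility
    change only the Locator, leaving transport-layer state (bound to the
    Identifier) and DFZ routing tables untouched."""
    n = int(ipaddress.IPv6Address(addr))
    return n >> 64, n & ((1 << 64) - 1)

locator, nid = ilnp_split("2001:db8:0:1::42")
```

Because the packet format is unchanged, routers in the core forward these packets as ordinary IPv6; only end systems need to know about the split, which is the deployment argument the proposal makes.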

Kamesh Munagala, Duke University
Richer Models and Mechanisms for Internet Auctions

Auction theory has a rich and growing body of work, both in economics and, more recently, in computer science. The rise of modern applications stemming from the Internet (mainly through search engines, social networks, and auction sites) has been the main driving force behind the development of richer models and mechanisms for auctions. Though Internet auctions have largely derived their theory and implementations from traditional auctions, there are several features, particularly externalities, budgets, and the presence of side-information, that distinguish these auctions and call for a paradigm shift in their design. The PI seeks to develop new models and algorithms to study these allocation problems.
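
One way to see why budgets force a rethink: even the textbook single-item second-price (Vickrey) auction loses its clean truthfulness properties once bids are capped by budgets. A toy illustration (the capping rule and the names here are ours, not the mechanisms the PI proposes to develop):

```python
def vickrey_with_budgets(bids, budgets):
    """Single-item sealed-bid second-price auction in which each
    bidder's effective bid is capped at their budget.
    bids/budgets: dicts bidder -> amount. Returns (winner, price)."""
    effective = {b: min(bids[b], budgets[b]) for b in bids}
    ranked = sorted(effective, key=effective.get, reverse=True)
    winner = ranked[0]
    # winner pays the second-highest effective bid (0 if alone)
    price = effective[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

print(vickrey_with_budgets({"a": 10, "b": 7, "c": 4},
                           {"a": 6, "b": 9, "c": 9}))
```

With unlimited budgets this reduces to the classical second-price auction; here bidder a's high value of 10 is irrelevant because a budget of 6 binds, and b wins at a's capped bid.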

Florian Metze, Carnegie Mellon University
Ad-Hoc Microphone Arrays

Recognition of speech recorded through distant microphones (e.g. table-top microphones in video conferences, hands-free units for mobile devices) continues to be one of the most challenging tasks in speech processing, with word error rates around 30% even in otherwise ideal conditions. This is mainly due to a mismatch between acoustic models trained on single-channel, close-talking data and the conditions encountered during testing. Single-channel signal enhancement improves the perceptual quality of speech, but does not improve recognition. Microphone arrays can be used in some situations, but they require careful placement and speaker localization and tracking to function well, which is often impractical. We propose a training scheme for acoustic models using multiple arbitrarily placed microphones ("ad-hoc microphone array") in parallel, instead of just one. By training and adapting the acoustic model on multiple slightly different inputs, instead of just one, it will generalize well to the microphone conditions encountered during testing. Recent advances in discriminative training of acoustic models allow joint optimization of speaker-dependent acoustic models in multiple discriminatively trained feature spaces, derived from multiple microphones. This allows training acoustic models independently of spatial arrangements in the room. Preliminary results using non-discriminative transformations confirm the effectiveness of this approach, and suggest discriminatively trained models should be sufficient for indexing of end-user videos recorded with hand-held devices, transcription, summarization, or translation of content in tele-presence systems, and other applications.

Richard Stern, Carnegie Mellon University
Robust Speech Recognition in Reverberant Environments

Despite decades of promise, the use of speech recognition, speaker identification, and related speech technologies in practical applications remains extremely limited.  This disappointing outcome is a consequence of a lack of robustness in the technology: the performance of speech-based systems in natural environments, especially those in which the microphones are distant from the speaker, is far worse than the performance attainable with a head-mounted microphone. Any type of distant-talking speech recognition system, as would be used in systems that incorporate voice input/output in applications such as videoconferencing or the transcription, diarization, and analysis of meetings, must cope with the effects of both additive noise and the reverberation that is present in natural indoor acoustical environments.  While substantial progress has been made in the treatment of additive noise over the past two decades, the problem of reverberation has been quite intractable until recently, and the accuracy of speech recognition systems and related technologies degrades sharply when these systems are deployed in natural rooms. The goal of the proposed research is to develop paradigm-shifting signal processing and analysis techniques that would provide sharply improved speech recognition accuracy in reverberant environments.  Toward this end we will extend and combine two very complementary technologies that have shown great promise (and some success) in ameliorating the difficult problem of machine speech perception in reverberation.  The first approach is motivated by the well-documented ability of the human auditory system to separate sound sources and understand speech in highly reverberant environments.
Toward this end we have developed feature extraction procedures that exploit the relevant processing by the auditory system to provide speech representations that maintain greater invariance in the presence of signal distortion than is observed with conventional feature extraction.   The second approach is based on blind statistical estimation of the signal distortion introduced by the filtering effects of room acoustics.  Solutions are developed at multiple levels of processing that are based on parameter estimation, and that exploit constraints imposed by all reverberant environments and by the nature of speech itself.   The proposed work consists of a combination of fundamental research in robust signal processing for reverberant environments and application development on tasks of relevance to Cisco.  The fundamental research will include further development of the auditory-based approaches, and research in the combination of approaches based on auditory modeling and statistical modeling. We are also committed to developing our systems in a context that is relevant to the mission of the Speech and Language Technologies (SaLT) section of the Emerging Technologies Group.   We intend to develop and benchmark our algorithms using data that are relevant to the group, to the extent possible.   In all cases we will transfer working technology to the SaLT group, including computationally-efficient realizations that can be run in online fashion for real-time applications as well as more complex implementations that will provide superior performance for non-real-time speech analysis, with complementary solutions developed for single microphones, pairs of microphones, and microphone arrays.  The success of our efforts will be measured by the extent to which our approaches provide benefit compared to the conventional MFCC/PLP/AFE front ends in current use by the industry.

Prashant Shenoy, University of Massachusetts Amherst
Architectures, Algorithms and Models for Smart Connected Societies

Today's smart electric grids provide support for various technologies, including net metering, demand response, distributed generation, and microgrids. As smart meters become more sophisticated, they are able to measure household power consumption at ever finer time-scales. Such fine-scale measurements have serious privacy implications, since they inadvertently leak detailed information about household activities.   Specifically, we show that even without any prior training, it is possible to perform simple analytics on the data and use patterns of power events to extract important information about human activities in a household. To mitigate these problems, we will design and implement a privacy-enhancing smart meter architecture that enables electric utilities to achieve their net metering goals while respecting the privacy of their consumers. Our approach is based on the notion of Zero-Knowledge proofs and provides cryptographic guarantees for the integrity, authenticity, and correctness of payments, while allowing variable pricing, demand response, etc., all without revealing the detailed power measurements gathered by smart meters. We will implement our approach on a testbed of 50 prototype TED/Linux sensors that mimic smart meters and experimentally demonstrate the efficacy of our approach.
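
The flavor of such privacy-preserving billing can be illustrated with additively homomorphic commitments (e.g. Pedersen-style): the meter commits to each fine-grained reading, and the utility verifies that the billed total matches the product of the commitments without ever seeing the individual readings. A toy sketch with deliberately tiny, insecure parameters (illustration only, not the proposed protocol; real deployments use large groups, trusted parameter setup, and accompanying zero-knowledge proofs):

```python
import random

# Toy Pedersen-style commitments over a small prime-order subgroup.
# P = 2Q + 1 is a safe prime; G and H generate the order-Q subgroup.
# These tiny parameters are for illustration and offer no security.
P = 1019
Q = 509
G, H = 4, 9

def commit(value, r):
    """Commit to `value` with blinding factor `r`."""
    return pow(G, value, P) * pow(H, r, P) % P

readings = [3, 7, 5, 2]                       # kWh per interval (hidden)
rs = [random.randrange(Q) for _ in readings]
commitments = [commit(m, r) for m, r in zip(readings, rs)]

# The meter reveals only the total and the combined blinding factor;
# the utility checks them against the product of the commitments.
total, r_total = sum(readings), sum(rs) % Q
product = 1
for c in commitments:
    product = product * c % P
assert product == commit(total, r_total)      # homomorphic check passes
print("bill verified for total:", total)
```

The check works because the product of commitments is itself a commitment to the sum of the committed values, so variable pricing can be applied to the verified total alone.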

Balaji Prabhakar, Board of Trustees of Leland Stanford Junior University
Stanford Experimental Data Center Lab

The Stanford Experimental Data Center Lab has a broad research agenda, spanning networking, high-speed storage, and energy efficiency. In networking, a central project is to design and build a prototype of a large (hundreds of ports), low-latency, high-bandwidth interconnection fabric. Other major projects include rapid and reliable data delivery, and software-defined networking. Storage is viewed as residing largely, if not entirely, in the DRAM of servers; this is the focus of the RAMCloud project. The overall goal is to combine the RAMCloud project and the interconnection fabric project so that applications that access data can do so with a latency of 2 to 3 microseconds or less.

Patrick E. Mantey, University of California, Santa Cruz
Smart Meters and Data Networks can Improve Operation and Reliability of Electric Power Distribution

Smart meters are now being widely deployed, with their primary use being to replace the human reading of meters at consumer sites.  In the future, these smart meters will also provide usable consumption data to consumers to help them manage their electrical use when prices vary depending on the time of day (TOU: time-of-use metering) and on conditions in the power grid (CPP: critical peak pricing).  The infrastructure for smart meters results in an (IP mesh) data network that overlays the power distribution subsystem.  The focus of this work is to examine how these smart meters, and their data network, augmented by making other components of the power distribution subsystem "intelligent" (i.e. gathering data and communicating that data), can improve the quality and reliability of power distribution.  Smart meters today measure in real-time the variables associated with the energy delivered to each consumer (e.g. voltage level, harmonics, real power, reactive power, total energy).  The smart meter mesh network delivers selected data from the smart meters to the utility.  Today that is primarily used to replace meter reading, bringing the total accumulated consumed energy amount to the utility for billing.  But using this network and more fully exploiting the measurements available from smart meters, combined with data from other distribution system components (e.g. transformers, relays, etc.) when connected to this data network, and used with data now available at the distribution substations, could help proactively discover when components are failing, and improve the response to service interruptions. From the proposed research, we expect to identify the data and the processing needed to make this (more fully "intelligent") distribution grid more reliable.  This will involve using a combination of analysis and modeling / simulation, and should lead to an economic rationale for adding additional "smart" components to power distribution subsystems.
Our campus electrical distribution system will be part of our laboratory for this work, providing real data and a real-world context for the problems addressed via analysis and simulation in this research. The work proposed will strengthen the efforts of the Baskin School of Engineering at UC Santa Cruz to educate engineers relevant to "smart grid", with expertise in power systems and data networks. 

Costin Raiciu, Universitatea Politehnica Bucuresti
Evolving Networks with Multipath TCP

Multipath TCP is an idea whose time has come. The IETF is standardizing changes to the TCP protocol such that it can use multiple paths for each connection. The benefits are increased robustness, network efficiency, and avoiding congested paths, providing a better end-user experience. MP-TCP is applicable not only to the Internet, but also in data centers, where it can seamlessly utilize the many alternative paths available between nodes, increasing utilization and fairness. Our previous research has focussed on making the MP-TCP protocol deployable, and on congestion control to make it efficient. This congestion control has now been adopted by SCTP.  In this proposal we take the next step, asking how networks might evolve to better support MP-TCP. Our focus is on deployable changes to the network that would enhance the overall performance. We will explore changes to multipath routing in our quest to explore the interactions between Internet traffic engineering and MP-TCP. We will also explore changes to existing data center topologies and routing protocols that play to MP-TCP's advantages. This proposal is linked to and has the same content as the proposal from Prof. Mark Handley, UCL.
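
The coupled congestion control mentioned above links the increase rule across subflows so that a multipath flow is no more aggressive than a single TCP flow on its best path, while still shifting traffic away from congested paths. A simplified sketch of the linked-increases rule standardized by the IETF (RFC 6356), expressed in packet units with one-packet ACKs (the function and variable names are ours):

```python
def lia_increase(cwnds, rtts, i):
    """Per-ACK congestion window increase for subflow i under MPTCP's
    Linked Increases Algorithm (RFC 6356), in packets.
    cwnds: per-subflow congestion windows; rtts: per-subflow RTTs (s)."""
    total = sum(cwnds)
    # alpha couples the subflows: it scales aggressiveness so the
    # aggregate matches a single TCP flow on the best path.
    alpha = total * max(c / r ** 2 for c, r in zip(cwnds, rtts)) \
            / sum(c / r for c, r in zip(cwnds, rtts)) ** 2
    # coupled increase, capped by standard TCP's 1/cwnd_i
    return min(alpha / total, 1 / cwnds[i])

# With a single subflow this degenerates to standard TCP (1/cwnd):
print(lia_increase([10], [0.1], 0))
# Two identical subflows each grow more slowly than a lone TCP flow:
print(lia_increase([10, 10], [0.1, 0.1], 0))
```

This is the property that lets MP-TCP coexist safely with regular TCP at shared bottlenecks while exploiting extra capacity on disjoint paths.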

Mark Handley, University College London
Evolving Networks with Multipath TCP

Multipath TCP is an idea whose time has come. The IETF is standardizing changes to the TCP protocol such that it can use multiple paths for each connection. The benefits are increased robustness, network efficiency, and avoiding congested paths, providing a better end-user experience. MP-TCP is applicable not only to the Internet, but also in data centers, where it can seamlessly utilize the many alternative paths available between nodes, increasing utilization and fairness. Our previous research has focussed on making the MP-TCP protocol deployable, and on congestion control to make it efficient. This congestion control has now been adopted by SCTP. In this proposal we take the next step, asking how networks might evolve to better support MP-TCP. Our focus is on deployable changes to the network that would enhance the overall performance. We will explore changes to multipath routing in our quest to explore the interactions between Internet traffic engineering and MP-TCP. We will also explore changes to existing data center topologies and routing protocols that play to MP-TCP's advantages. This proposal is linked to and has the same content as the proposal from Dr Costin Raiciu, PUB.

Albert Cabellos, Technical University of Catalonia (UPC) - Barcelona Tech
How Can LISP Improve Inter-domain Live Streaming?

In the current Internet, live video streaming services typically operate at the application layer (e.g., P2P Live) and not at the network layer. The main reason is that multicast is not widely available at core routers because of its high deployment and operational costs. In this project we aim to develop a new multicast protocol that takes advantage of LISP, thereby avoiding deployment at these core routers.

Mohammad Tehranipoor, University of Connecticut
Analysis and Measurement of Aging Effects on Circuit Performance in Nanometer Technology Designs

Current static and statistical static timing analysis (STA and SSTA) tools take into account process and parasitic parameters to calculate circuit timing and perform timing closure. However, the performance of a chip in the field depends on many other parameters, namely workload, temperature, voltage noise, and lifetime duration. It is vitally important to take these into account during pre-tapeout timing analysis. Among all these parameters, workload presents a major issue, since in many cases the workload is unknown to the designers and test engineers during chip design and fabrication. In this project, we will develop methodologies for performance optimization with and without known workload to ensure that circuits meet their timing budgets in the field. In addition, we will generate representative critical reliability (RCR) paths to be monitored in the field. These paths represent the timing of the circuit and age at the same rate as the most critical reliability paths in the circuit. Our previously developed sensors will be utilized to measure the RCR paths in the field accurately. This project will also help perform better performance verification and Fmax analysis.

Ljudevit Bauer, Carnegie Mellon University
Usable and Secure Home Storage

Digital content is becoming common in the home, as new content is created in digital form and people digitize existing content (e.g., photographs and personal records).  The transition to digital homes is exciting, but brings many challenges.  Perhaps the biggest challenge is dealing with access control.  Users want to selectively share their content and access it easily from any of their devices, including shared devices (e.g., the family DVR).  Unfortunately, studies repeatedly show that computer users have trouble specifying access-control policies.  Worse, we are now injecting the need to do so into an environment with users who are much less technically experienced and notoriously impatient with complex interfaces.  We propose to explore an architecture, mechanisms, and interfaces for making access control usable by laypeople faced with increasing reliance on data created, stored, and accessed via home and personal consumer electronics.  The technologies that we plan to study and build on include logic-based access control, expandable grids interfaces, semantic policy specification, and reactive policy creation.

Aditya Akella, The University of Wisconsin–Madison 
Measurement and Analysis of Data Center Network Architectures

Data centers are playing a crucial role in a variety of Internet transactions. To better support current and future usage scenarios, many research efforts have proposed re-architecting key aspects of data centers, including servers, topology, switching, routing and transport. Clearly, the performance of these proposals depends on properties of data center application workloads and their interaction with various aspects of the data center infrastructure (e.g., physical host capabilities, nature of virtualization support, multipath, buffering etc.). Unfortunately, most existing designs are evaluated using toy application models that largely ignore the impact of the above factors on the proposals' effectiveness. The goal of our work is to create representative large-scale application traffic models for data centers based on measurements of data centers in the wild, and to employ the models to accurately compare the trade-offs of existing design proposals. To date, our group has collected and analyzed packet traces, SNMP logs, and topology data from 5 private data centers and 5 cloud data centers to examine a variety of DC network traffic characteristics, such as applications deployed, switch-level packet transmission behavior, flow-level properties, link utilizations, and the persistence and prevalence of hotspots. The proposed research will expand this work in three broad ways: (1) we will expand our empirical observations through additional fine-grained data collection at hosts, switches and links in data centers, as well as additional empirical results on transport and application properties; (2) we will create rich application models capturing dependencies across tasks and identifying key invariant properties of application network and I/O patterns; and, (3) we will use the measurements/models to systematically evaluate DC design proposals while holistically considering host, switch, application and DC-wide properties.

Leonard J. Cimini, University of Delaware
Optimizing Resources for Multiuser-MIMO in IEEE 802.11ac Networks Using Beamforming

The proposed work is a continuation of a successful project in understanding fundamental issues in MIMO beamforming (BF) for IEEE 802.11 networks.  The enhanced capabilities promised by 802.11ac come with many challenges, in particular the task of effectively managing system resources. Fundamental questions include:  How to most effectively use the available degrees of freedom (DoF)?  Which users to serve, taking into account fairness and aggregate throughput?  And how many streams to provide to each user?  In the coming year, we propose to continue our research on MU-MIMO BF.  Important tasks that we would like to tackle include: developing non-iterative algorithms that achieve a balance between interference and desired signal levels for any number of users and available DoF; determining guidelines for when to use MU-MIMO rather than provide multiple streams to only one user, what information is required at the AP to make this decision, and how to perform mode switching; quantifying the sensitivity of MU-MIMO to imperfect CSI and developing robust BF algorithms; and evaluating a variety of scheduling/signaling strategies in MU-MIMO environments.
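
As a point of reference for what the proposed algorithms improve upon, the textbook MU-MIMO baseline with sufficient DoF is zero-forcing precoding, which inverts the multi-user channel so that each user's stream arrives free of inter-user interference, at the cost of strong sensitivity to imperfect CSI. A numpy sketch under the idealized assumption of perfect CSI (illustrative only, not one of the proposed algorithms):

```python
import numpy as np

def zero_forcing_weights(H):
    """Zero-forcing MU-MIMO precoder. H is an (n_users x n_tx_antennas)
    channel matrix; returns column-normalized weights W such that
    H @ W is diagonal, i.e. no inter-user interference."""
    W = np.linalg.pinv(H)                    # right pseudo-inverse of H
    return W / np.linalg.norm(W, axis=0)     # per-user power normalization

rng = np.random.default_rng(0)
# 4-antenna AP serving 2 single-antenna users (Rayleigh-like channel)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
W = zero_forcing_weights(H)
print(np.round(np.abs(H @ W), 6))            # off-diagonal entries ~ 0
```

The extra two transmit antennas beyond the two served users are the surplus DoF that the interference/desired-signal trade-off algorithms in the proposal would exploit more cleverly than plain inversion does.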

Jacob A. Abraham, The University of Texas at Austin
Mapping Component-Level Models to the System and Application Levels

Complexity is the primary stumbling block to the effective analysis of computer systems.  The analysis of complex systems, including Systems-on-Chip (SoCs), for verification, test generation, or power consumption is exacerbated by the increasing complexity of integrated circuits.  Most commercial tools can only deal with relatively small components in the design.  The proposed research is directed towards developing powerful algorithms for mapping component-level models to the system level.  We have developed mapping algorithms for conventional instruction-set processors.  We will extend the analysis techniques to general systems, including application-specific processors, which have not been the target of research.  The research will focus on the following problems: 1. Extending the research in mapping techniques for conventional processors to general systems, including application-specific processors such as packet processors. 2. Developing new techniques for accurate estimation of the fault coverage of input sequences at the system level. 3. Developing new abstraction techniques to reduce the complexity of the mapping process.

Michael Nicolaidis, Institut Polytechnique de Grenoble / TIMA
Techniques for Soft Error (SEU/SET) Detection and Mitigation in ASICs

Today, large ASICs can have on the order of ten million flip-flops and it is challenging to meet overall system reliability targets without taking active steps to manage the effect of Single Event Upsets (SEUs) and Single Event Transients (SETs). Even though the effect of the vast majority of soft error events is benign, a small number of upsets do result in a high-severity system impact. The system-level effects of soft-errors vary with the application. In many systems, if the events are detected and the system recovers quickly without impacting the end-user experience, then they can be tolerated at a relatively high frequency. Thus the main focus of this work is on the efficient detection of flip-flop and logic soft-errors. Many innovative techniques exist for circuit level detection of soft errors. Some of these are dependent on specific clocking methodologies while others are applicable to designs following an ASIC design flow (D-type flip flop design paradigm). This work will explore various approaches to SEU and SET detection while striving to minimize the area, timing and power penalties, including design sensitivity analysis, timing slack analysis, automated parity generation, automated double-sampling insertion, and selective substitution with hardened flip-flops.
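
Double sampling, one of the techniques listed above, captures the same data node twice at a small time offset; disagreement between the two samples flags a transient. A schematic software illustration of the detection principle (the real mechanism is a circuit-level latch pair inserted by the design flow, not code; the names here are ours):

```python
def make_wave(value, glitches):
    """Model a logic waveform: nominal `value`, flipped inside
    any of the (start, end) glitch windows."""
    def wave(t):
        flipped = any(t0 <= t < t1 for t0, t1 in glitches)
        return value ^ flipped
    return wave

def set_detected(wave, t_clk, delta):
    """Double sampling: capture the data line at the clock edge and
    again `delta` later; a mismatch flags a single-event transient."""
    return wave(t_clk) != wave(t_clk + delta)

clean = make_wave(1, [])
glitchy = make_wave(1, [(10.2, 10.4)])       # transient after the edge
print(set_detected(clean, 10.0, 0.3))        # no upset on clean data
print(set_detected(glitchy, 10.0, 0.3))      # shadow sample catches it
```

The choice of `delta` is exactly where the timing-slack analysis mentioned above comes in: the offset must exceed the transient widths of concern yet fit within the path's available slack.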

Apinun Tunpan, Asian Institute of Technology
DTNC-CAST: A DTN-NC overlay service for many-to-many bulk file dissemination among OLSR-driven mobile ad hoc network partitions for disaster emergency responses

The problem of creating emergency communication networks, especially in situations where a large-scale disaster (e.g. a tsunami) wipes out traditional telecommunication infrastructure, remains a great challenge.  In the first few hours after a large-scale disaster, rescuers may deploy Mobile Ad hoc Networks (MANETs) to set up makeshift communication [1]. There are three major concerns in deploying such post-disaster MANETs.  One, running a MANET in unfavorable post-disaster environments is prone to very disruptive connectivity, resulting in multiple MANET partitions which occasionally merge and split. Two, there is a need to conserve power and to make efficient use of limited communication bandwidth, which is likely shared among many rescuers and responsible agencies who have to disseminate bulk files in a many-source to many-destination manner. Three, the disaster emergency MANET should be designed to operate in a manner that suits the information flows of the on-going rescue operations. To deal with disruptive networks, Disruption/Delay Tolerant Networking (DTN) [2] can be deployed. During the course of a bulk file transfer, we expect the network to be highly disruptive because of intermittent connectivity among MANET nodes. Hence, DTN is more suitable in such emergency situations than traditional MANET transfers, where a slight disruption between two peer-to-peer nodes would cause failure of the whole bulk file transfer. Our past MANET field tests have revealed that voice over IP and voice/video streaming become largely unusable when MANET connectivity becomes more disruptive. Network Coding (NC) [4] is an important paradigm shift in that it allows intermediate router nodes to algebraically encode (i.e. mix) and decode (i.e. de-mix) data for multiple on-going sessions.  NC can potentially be used to boost bandwidth usage efficiency, to increase communication robustness, and to design new ranges of networked applications.
The success of NC depends very much on the topology of the network and the choice of network codes. The main challenges we propose to solve in this research are i) to incorporate both DTN and NC into a real-world testbed for post-disaster emergency MANET communication scenarios, and ii) to utilize the topology information available from the underlying proactive OLSR MANET routing protocol to make DTN routing decisions and to select network codes.  We name our proposed system DTNC-CAST.  DTNC-CAST is intended for DTN bulk file dissemination from multiple sources to multiple receivers among intermittently connected OLSR MANET partitions. We propose to realize DTNC-CAST using the DTN2 Reference Implementation (DTNRI). Lastly, we plan to field test DTNC-CAST in scenarios corresponding to post-disaster rescue operations.
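
The encode/de-mix idea is simplest over GF(2), where an intermediate node XORs two packets so that a single transmission serves two receivers that each already hold one of the originals. A minimal sketch (practical schemes, such as random linear network coding, operate over larger fields; the packet contents are invented for illustration):

```python
def xor_mix(a: bytes, b: bytes) -> bytes:
    """Algebraically encode two equal-length packets into one over
    GF(2), i.e. bitwise XOR, as a coding router node might do."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a, pkt_b = b"RESCUE-MAP-TILE1", b"CASUALTY-LIST-02"
coded = xor_mix(pkt_a, pkt_b)          # one transmission carries both

# A receiver that already overheard pkt_a recovers pkt_b (de-mixing):
assert xor_mix(coded, pkt_a) == pkt_b
print("recovered:", xor_mix(coded, pkt_a))
```

In a partitioned post-disaster MANET this is what buys bandwidth efficiency: one broadcast of the coded packet replaces two unicast retransmissions, provided the receivers' side information is known, which is where the OLSR topology information would be used.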

Jun Bi, Tsinghua University
Study on Identifier-Locator Network Protocol (ILNP)

ILNP (Identifier-Locator Network Protocol) has become one of the most promising approaches to the routing architecture of the future Internet. The IRTF RRG recently recommended in RFC 6115 that ILNP be pursued within the IETF. However, there are still many technical questions about ILNP that nobody can clearly answer, due to the lack of experimentation with ILNP. The goals of this project are to develop an ILNP prototype and to study ILNP based on it, including: (i) developing an implementation of ILNP on Linux according to the basic ILNP specification; (ii) studying the possible problems caused by mobility, especially in the scenario where both hosts are moving; (iii) researching the ID/Locator mapping system, including improvements to the DNS and DNSSEC systems for more efficient and secure ID/Locator mapping; (iv) working together with Cisco to improve ILNP based on the study results; and (v) working together with Cisco to submit related RFC drafts describing the implementation, the problems encountered, and the solutions explored.

Lynford Goddard, University of Illinois at Urbana-Champaign
The CISCO UCS for Engineering Design: Applications in Electromagnetic and Photonic Simulations

Introduction: The Finite Element Method (FEM) is a general computational technique for numerically solving partial differential equations. FEM is widely used in application simulation in nearly every field of engineering [1-3], as well as in physics, chemistry, geology, environmental sciences, and even economics and finance. The power, versatility, and accuracy of the technique are evidenced by its successful application to many practical problems in these areas and by the existence of a wide variety of commercial CAD software that employs the technique. FEM has several advantages over other general numerical methods for solving partial differential equations such as the finite difference method, including more accurate discretization of the problem domain, better control of stability and convergence, and a potential for higher overall accuracy. However, in contrast to the finite difference method, parallelization of the FEM algorithm on conventional parallel computing platforms is less straightforward.  Algorithms used to implement FEM require quick communication of massive amounts of intermediate results among the distributed cores during computation. Consequently, in computational electromagnetics and photonics, the use of FEM is primarily restricted to small computational domains, i.e. to model structures and devices that are on the order of the electromagnetic wavelength or smaller. A suitable parallel platform would therefore advance the application of FEM to problems involving structures that are significantly larger than a wavelength. There is much interest among engineers and scientists in using FEM to simulate such structures, including antennas, microwave systems, airplanes, missiles, fiber-optic components, and integrated photonic devices and circuits.
These structures are significantly larger than the wavelength and are difficult to simulate using FEM because of the massive memory requirements and the resulting need for an efficient, high-bandwidth communication network. Moreover, to perform device design, several simulation runs with different device geometries are required, necessitating further computation. Statement of Work: The PI and his team propose a three-year phased approach to apply the unique architecture of Cisco's Unified Computing System to demonstrate that this computing medium is effective for advanced FEMs. In particular, the team will develop open-source software based on FEM for simulation and design of electromagnetic and photonic devices and systems. The software will be optimized to run on Cisco's UCS platform by using algorithms that exploit the unique memory features and shared network capabilities of the UCS platform in an efficient way. The overall motivation is to perform accurate 3D photonic simulations on large computational domains. To minimize project risk, we will take a top-down multi-resolution approach and adapt the project based on sequential learning. In year 1, we will focus on porting Charm++ onto UCS and on simulating simple photonic structures using just 1-2 nodes. The goal will be to publish a conference paper on device and/or computer science aspects of the research. In years 2 and 3, we will apply lessons learned and develop fast, accurate simulation of complex structures [4-8]. The code will also be designed to run efficiently given available resources; specifically, load balancing and communication optimization will be implemented. The proposed research plan offers rich opportunities for teaching and training of graduate and undergraduate researchers in engineering design and device simulation, parallel computing, and the UCS platform. In addition, the project will provide a natural recruitment pipeline of B.S., M.S., and Ph.D.
graduates with in-depth training on and experience with the UCS platform. Research and teaching will be integrated through the development of a new graduate seminar course, "Modeling of Photonic Devices," which will also serve to introduce the platform to students from other research groups. Research and teaching results will be disseminated in journals and conferences to enhance the current understanding of parallel computing architectures and their application to electromagnetic and photonic device modeling.
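
At its smallest scale, the assemble-and-solve structure that FEM parallelization must distribute can be shown on a 1D Poisson problem, -u'' = 1 on (0,1) with zero boundary values, discretized with linear elements. An illustrative sketch (not the proposed UCS software; the 3D electromagnetic problems targeted here are vastly larger but share this pattern of stiffness-matrix assembly followed by a sparse solve):

```python
import numpy as np

def fem_poisson_1d(n):
    """Solve -u'' = 1 on (0,1), u(0)=u(1)=0, with n linear elements.
    Returns the interior nodal values; the exact solution is
    u(x) = x(1-x)/2, which linear FEM reproduces exactly at the nodes."""
    h = 1.0 / n
    m = n - 1                          # number of interior nodes
    K = np.zeros((m, m))               # global stiffness matrix
    f = np.full(m, h)                  # load vector for rhs == 1
    for i in range(m):                 # assemble the tridiagonal system
        K[i, i] = 2.0 / h
        if i + 1 < m:
            K[i, i + 1] = K[i + 1, i] = -1.0 / h
    return np.linalg.solve(K, f)

u = fem_poisson_1d(8)                  # interior node 3 sits at x = 0.5
print(u[3])                            # exact value there: 0.125
```

In the parallel setting it is precisely this assembly (element contributions scattered into a shared sparse matrix) and the subsequent solve that generate the intensive inter-core communication discussed above.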

George Porter, University of California, San Diego
Incorporating Networked Storage within Highly Efficient Data-intensive Computing

To date, evaluation of Data-intensive Computing Systems (DISC) has focused primarily on their performance in isolation.  Largely ignored have been the performance implications of migrating data to and from external, networked storage, as well as of supporting high performance in multi-tenant environments with background traffic.  In this proposal, we argue for taking a more holistic view of DISC computing that includes the I/O necessary to connect these DISC systems with the actual sources and sinks of data, located elsewhere in the enterprise.  The motivation for our approach is to take advantage of the ability to optimize I/O and to extract as much parallelism as is available from the networked storage itself and the network paths interconnecting that storage to the DISC system, so that we can improve total system performance.  An unfortunate result of not considering the problem in this way might be a system that is highly tuned and fast by itself, but spends the majority of its time loading and storing data to networked storage.

Thomas Huang, The Board of Trustees of the University of Illinois 
A Mobile and Asymmetrical Telepresence System with Active 3D Visual Sensing

A new paradigm of asymmetrical telepresence is proposed to meet the increasing demand for communication and collaboration between stationary and mobile devices. By leveraging computer vision analysis and synthesis techniques, we perform computation-intensive tasks at the resource-rich stationary end as much as possible and communicate as little essential visual information to the mobile ends as possible. The visually immersive experience at each endpoint is maximized subject to local infrastructure constraints. The functionalities we shall develop include person identification, emotion recognition, 3D face rendering (avatars), and super-resolution. A critical issue is accurate 3D human pose estimation; we propose a novel algorithm using multiple views and 3D depth information, which is expected to give more robust and accurate pose estimates than existing methods. We also expect to integrate the proposed asymmetrical paradigm with Cisco's current TelePresence products and to build a prototype system with active visual sensing functions enabled by our existing and proposed vision algorithms.

Alex Bordetsky, Naval Postgraduate School
Space-Based Tactical Networking

Our primary objective is to create a testbed environment that integrates satellite networking platforms with emerging ad-hoc tactical networks using Cisco's radio-aware routing approach.  Its intended purpose is to support exercises and experimentation in a variety of environments, including battlefield telemedicine, maritime interdiction operations (MIO), intelligence, surveillance and reconnaissance (ISR), and casualty evacuation, in a way that is transparent to the warfighter.  We will enable other technologies to bring timely, necessary information to decision makers by developing technologies that provide reliable, scalable networking services to austere environments, regardless of existing infrastructure, in a way that requires no additional user tasks or training.  To achieve our objective, our experimentation process will leverage IRIS and emerging radio-aware routing integration with ad-hoc tactical networks, mobile sensors, and unmanned systems, primarily in MIO, battlefield medical, and fire support scenarios, including synchronous global reachback to subject matter experts.  The introduction of IRIS and radio-aware routing will eliminate the traditional "bent pipe," circuit-switched approach to satellite communications and implement a space-based, packet-switched solution providing the warfighter with net-enabled capabilities. We propose to explore, model, and simulate a constellation of satellites in orbit with IRIS-type routers on board, with an emphasis on Delay Tolerant Networking (DTN) and radio-aware routing to establish inter-satellite links.  To minimize the power and size required for handsets to reach the satellites, we will focus our efforts on the study of Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) constellations.  Additionally, we will apply the fundamentals of orbital mechanics and the unique aspects of the space environment to predict link quality in order to maintain quality of service (QoS) despite dynamic link status.

Olaf Maennel, Loughborough University
Towards a Higher-Level Configuration Specification Language

We believe that advances in computer science in the 21st century should enable us to better tackle the increasing complexity of network configurations. There is a need to step back from low-level tools and conduct research into languages that express networks in a holistic way. Such languages need to satisfy a set of fundamental conditions, such as provable protocol convergence. The work will investigate the feasibility of a language that expresses high-level network constraints (for example, those that come from SLAs). Such a language will, by its very design, guarantee sane configurations where possible and in other cases indicate potential problems.

Konstantinos Psounis, University of Southern California
Efficient airtime allocation in wireless networks

The wireless networks of the future promise to support a number of useful and exciting applications, including home networking, vehicular networks providing increased safety and a plethora of entertainment options, social networking over an ad hoc network of users, environmental monitoring, and, more generally, pervasive communication anywhere and anytime. These applications rely on a combination of single-hop wireless technologies, like enterprise WiFi, with multi-hop wireless networking technologies, like sensor networks, mesh networks, mobile ad hoc networks, and disruption tolerant networks. The central challenge of these new technologies is to offer good performance at a reasonable cost despite increased wireless interference and a highly distributed, decentralized architecture. Specifically, one of the most important challenges is how to allocate the available bandwidth and airtime of these networks efficiently, without requiring changes to the decades-old network stack, which includes protocols like 802.11 and TCP. In this proposal, building upon our prior work, we plan to design, implement, and test a simple yet near-optimal scheme that allocates bandwidth and airtime in single- and multi-hop wireless networks and is fully transparent to existing protocols. Our scheme will have native support for mobile users, and, in collaboration with Cisco engineers, we plan to integrate it into a multi-technology networking platform that allows vehicles to connect to the network anywhere and anytime using a number of technologies, including WiFi, WiMAX, cellular data transfer, ad hoc communication, and disruption tolerant communication. The proposal is directly relevant to a number of current projects running at Cisco, and we will collaborate closely with at least one Cisco team working on such projects.
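
Why airtime (rather than throughput) allocation matters can be seen in a toy calculation for a single 802.11 cell, where one slow station drags every station down under throughput-fair contention. This is an illustration of the well-known rate anomaly, not the scheme the proposal will develop; the rates and allocation rules below are assumptions.

```python
# Toy comparison of throughput-fair vs airtime-fair allocation in one
# 802.11 cell. Illustrative only -- not the proposal's actual scheme.

def throughput_fair(rates):
    # Every station gets the same throughput x. Per-bit airtime on link
    # i is 1/r_i, and channel time sums to 1: x * sum(1/r_i) = 1.
    x = 1.0 / sum(1.0 / r for r in rates)
    return [x] * len(rates)

def airtime_fair(rates):
    # Every station gets an equal share (1/N) of channel time, so
    # station i achieves r_i / N regardless of the other links.
    n = len(rates)
    return [r / n for r in rates]

rates = [54.0, 54.0, 6.0]  # PHY rates in Mbit/s; one slow station
tf = throughput_fair(rates)
af = airtime_fair(rates)
print(sum(tf), sum(af))  # aggregate throughput under each policy
```

With one 6 Mbit/s station, airtime fairness more than doubles aggregate throughput, because the fast links are no longer held hostage to the slow link's per-bit airtime.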

Carsten Bormann, Universität Bremen TZI
Evolving the constrained-REST ecosystem

The IETF is creating a set of application layer specifications for constrained devices (the "Internet of Things").  While the initial design is nearing completion, a number of desirable next steps have become apparent: CoAP needs a security architecture that scales down to constrained devices but still provides the security needed in the global Internet and does not become an impediment to deployment; resource discovery mechanisms are required that maximize the ability of components at the application layer to work with other components without needing application-specific knowledge, enabling management, configuration, and even some operation of new components; interoperation is needed with protocols beyond HTTP, again without becoming too tightly tied to specific applications, enabling integration, e.g., with presence protocols; and significant benefits may be expected from using mobile code as a way to extend the functionality of components without needing to upgrade their software each time, enabling more meaningful interoperation. The proposal seeks to address these issues and opportunities in a way that generates results that feed into IETF specifications, leveraging the research team's existing successes in this space and planning for close collaboration with Cisco engineers engaged in IETF activities.

Yogendra Joshi, Georgia Tech
Experimental and Theoretical Evaluation of Alternate Cooling Approaches to 3D Stacked ICs and Their Implication on the Electrical Design and Performance

Cooling is a well-known critical bottleneck in 3D ICs. Moreover, the cooling solution itself has large implications for the 3D electrical design of the chips and packages. The proposed research by Drs. Joshi and Bakir will explore the design and technology implications of two different cooling solutions for 3D architectures and define their limits and opportunities. Namely, we are interested in 1) conventional air cooling, and 2) single-phase pumped liquid cooling using microchannels/pin fins. This work will be accomplished via both experimental test beds and compact physical modeling.  The modeling will take into account TSV density, chip thickness, and the thermal resistance of the interface between ICs. We will also perform first-order measurements exploring the impact of chip temperature rise on TSV and on-chip interconnect electromigration and resistance, and the subsequent impact on latency and bandwidth. This will be the first time that physical modeling and experimental data are used to benchmark the thermal and electrical performance of 3D ICs based on the choice of cooling technology, accounting for hot spots and exploring both steady-state and transient effects.
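
To give a flavor of what compact thermal modeling of a stack looks like, the sketch below computes junction temperatures of a two-die stack with a one-dimensional series resistance chain. The power and resistance values are hypothetical; a real compact model of the kind the proposal describes would also capture lateral spreading, TSV density, and hot spots.

```python
# Minimal 1-D compact thermal model of a die stack: junction
# temperatures are obtained by summing power flowing through a series
# thermal-resistance chain. All numbers below are hypothetical.

def stack_temperatures(powers, resistances, t_ambient=25.0):
    """powers[i]: W dissipated by die i (index 0 = farthest from sink).
    resistances[i]: K/W of the path above die i (die-to-die interface,
    or die-to-ambient through the heat sink for the last entry).
    Heat crossing layer i is the total power of all dies at or below i."""
    n = len(powers)
    temps = [0.0] * n
    t = t_ambient
    # Walk from the heat sink down toward the buried die.
    for i in range(n - 1, -1, -1):
        q = sum(powers[: i + 1])   # power crossing layer i (W)
        t += q * resistances[i]    # temperature rise across it (K)
        temps[i] = t
    return temps

# Two dies: a 10 W logic die buried under a 5 W die, 0.2 K/W die-to-die
# interface, 0.5 K/W from the top die to ambient through the sink.
print(stack_temperatures([10.0, 5.0], [0.2, 0.5]))
```

Even this toy model shows the buried die running hotter than the die adjacent to the sink, which is the core cooling difficulty in 3D stacks.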

Li Chen, University of Saskatchewan
SEE/Noise Study on a Power Controller and TCAM

This proposal is to study single event effects and noise in a digital power controller and a TCAM. These devices were identified as sensitive to neutrons and noise during qualification testing, and Cisco believes this could be a general issue for the industry. To raise industry awareness of the issue, understand the underlying physics and mechanisms, and thereby improve future designs, Cisco decided to follow up by moving the problem into fundamental research with our university group. Our research group has pulsed-laser measurement equipment, circuit-level simulation, and 3D TCAD simulation capabilities, and, most importantly, established expertise in this area. We will combine pulsed-laser testing and simulation tools to conduct the research; in particular, noise study using a pulsed laser is a new approach in this field. We expect to publish several papers in the next 1-2 years, which will demonstrate Cisco's visibility in this area and also provide our research group with material and funding for graduate students.

Diana Marculescu, Carnegie Mellon University
Modeling and Mitigation of Aging-Induced Performance Degradation

Negative Bias Temperature Instability (NBTI) is a transistor aging phenomenon that degrades digital circuit performance over time. It occurs when PMOS transistors are stressed under negative bias at elevated temperatures, leading to an increase in transistor threshold voltage and, correspondingly, a decrease in switching speed. Similarly, Positive BTI (PBTI) affects NMOS transistors stressed under positive bias and likewise manifests itself as a reduction in circuit performance. Another key factor in long-term circuit reliability is Hot Carrier Injection (HCI), which induces performance degradation by decreasing the driving strength (drain current) of both PMOS and NMOS transistors. In contrast to BTI, which depends strongly on the probability that a transistor is under stress (its stress probability), the rate of degradation due to HCI is correlated with the frequency of transistor switching (its switching activity). As technology scales, higher operating frequencies, higher on-die temperatures, and smaller transistor geometries have made these aging effects an increasingly dominant contributor to decreased system reliability. The work proposed herein addresses the impact of aging mechanisms, including BTI and HCI, by developing efficient and cost-effective techniques for modeling and mitigating aging-induced performance degradation and the associated timing failures.
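
The qualitative dependences above (BTI scaling with stress probability, HCI with switching activity) can be put into a toy power-law model. The constants, exponents, and the alpha-power delay law below are illustrative assumptions for exposition, not the models the proposal will develop or values fitted to any technology.

```python
# Toy power-law aging model; all constants are hypothetical.

def delta_vth_bti(stress_prob, years, a=0.004, n=1/6):
    # BTI: threshold-voltage shift grows with time^n, scaled by how
    # often the transistor sits under stress (its stress probability).
    return a * stress_prob * years ** n

def delta_vth_hci(activity, years, b=0.002, m=0.5):
    # HCI: degradation tracks switching activity rather than stress time.
    return b * activity * years ** m

def delay_degradation(dvth, vdd=1.0, vth=0.3, alpha=1.3):
    # Alpha-power law: gate delay ~ vdd / (vdd - vth)^alpha, so a shift
    # of dvth slows the gate by roughly this relative factor.
    return ((vdd - vth) / (vdd - vth - dvth)) ** alpha - 1.0

# A gate stressed half the time with 20% switching activity, aged 7 years.
dvth = delta_vth_bti(0.5, 7) + delta_vth_hci(0.2, 7)
print(f"relative delay penalty after 7 years: {delay_degradation(dvth):.4f}")
```

The point of such a model is that the two mechanisms take different workload statistics as input, which is why mitigation (e.g., guardbanding or activity shaping) must treat them separately.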

Bharat Bhuva, Vanderbilt University
Soft Error Response of 3D IC Technologies

3D IC technologies integrate multiple die in the same package for increased performance.  However, the resulting increase in transistor density may push soft error FIT rates beyond the cumulative FIT rates of the individual die.  This prospect of increased FIT rates must be investigated to ensure that designers do not end up with reduced system-level performance.  This proposal is for the evaluation of soft error rates in 3D IC technologies.  The technology characterization will be carried out on a three-tier 3D IC technology, using single-event transient and charge collection measurements to quantify FIT rates.

Nelson Morgan, The International Computer Science Institute
Robust recognition of speech from impromptu videos

Given our current cortically-inspired "manystream" front end, which has successfully reduced recognition errors on artificially degraded speech signals, we will experiment with recognition of speech from impromptu videos. We will then improve the system further by experimenting with alternate auditory representations to replace the log mel spectrum as the time-frequency representation that is further processed by our manystream nonlinear filtering module. All methods will be compared with a baseline method that was specifically developed for robustness to additive noise, namely the ETSI-standard AFE.

Guohong Cao, The Pennsylvania State University
Supporting Mobile Web Browsing with Cloud

The smartphone is becoming a key element in providing greater user access to the mobile Internet. Many complex applications, which used to run only on PCs, have been developed to run on smartphones.  These applications extend the functionality of smartphones and make it more convenient for users to stay connected. However, they also greatly increase the power consumption of smartphones, and many users are frustrated by the long delays of web browsing on smartphones. We have discovered that in 3G networks, the key reason for the long delay and high power consumption of web browsing is, most of the time, not bandwidth limitation; the limited local computation on the smartphone is the real bottleneck for opening most webpages. To address this issue, we propose an architecture, called Virtual Machine-based Proxy (VMP), to shift the computing from smartphones to the VMP, which may reside in the cloud. To demonstrate the feasibility of deploying the proposed VMP system in 3G networks, we propose to build a prototype using Xen virtual machines and Android phones on the AT&T UMTS network. We will address scalability issues and optimize the performance of the VMs on the proxy side.  We will consider compression techniques to further reduce bandwidth consumption and use adaptation to handle poor network conditions at the smartphone.

Amit Roy-Chowdhury, University of California, Riverside
Optimizing Video Search Via Active Sensing

Large networks of video cameras are being installed in many applications, e.g., surveillance and security, disaster response, and environmental monitoring. Since most existing camera networks consist of fixed cameras covering a large area, targets are often not covered at the desired resolutions or viewpoints, making analysis of the video difficult. For example, if we are looking for a particular person, we may find that the number of pixels on the face is too small (i.e., the resolution is too low) for face recognition algorithms to work reliably. In this project, we propose a novel solution strategy that integrates the analysis and sensing tasks more closely; we call this active sensing. It is achieved by controlling the parameters of a PTZ camera network to better fulfill the analysis requirements, thus allowing maximal utilization of the network, providing greater flexibility while requiring less hardware, and being cost-efficient.  In most work on video, the acquisition process has been largely separated from the analysis phases. However, when humans capture videos, we position the cameras to get the best possible shots for the goal at hand. In large networks it is infeasible to do this manually, and hence it is necessary to develop automated strategies whereby the acquisition and analysis phases are tightly coupled.

Alan E. Willner, University of Southern California
Flexible, Reconfigurable Capacity Output of High-Performance Optical Transceivers Enabling Cost-Efficient, Scalable Network Architectures

To meet the explosive capacity growth of future networks, it is crucial to advance the existing network architecture to support dramatic increases in transceiver data rates and effective utilization of network bandwidth. State-of-the-art transceivers provide 100-Gbit/s Ethernet per channel, falling short of the ultra-high rate of 1 Tbit/s needed to meet the expected growth. Two specific challenges remain: (a) given the large discrepancy between high-rate and low-rate data channels, the large capacity of a single data channel from the transceiver will often not be required and will be under-utilized, and (b) the large capital investment in ~Tbit/s terminal transceivers will often not be efficiently utilized. A laudable goal would be to create new transceivers and network architectures with extremely large capacity that can be tailored and shared to enable reconfigurable bandwidth allocation in future heterogeneous networks. We propose to develop new transceivers and flexible, scalable network architectures by using optical nonlinearities to uniquely encode the amplitude/phase of the data and to channelize the data into multiple channels for efficient capacity utilization in future network architectures. We will determine the performance trade-offs and potential network architecture gains and explore methods for flexible use of data rates up to 100 GBaud and dense constellations up to 256-QAM on two polarization states.

Constantine Dovrolis, Georgia Institute of Technology, College of Computing
A novel traffic shaping mechanism for adaptive video streaming

During the course of a previous Cisco-funded research project, we discovered a fundamental problem with adaptive video streaming applications. When multiple adaptive video players share the same bottleneck, they can fall into an oscillatory behavior in which they keep requesting different video bitrates. This behavior can also cause significant unfairness between players, as well as inefficient bandwidth utilization at the bottleneck. Further, this problem cannot be resolved simply by modifying a player's adaptation logic. The main objective of the proposed research is to solve this problem in a way that ensures the stability, fairness, and efficiency of adaptive video streaming. The specific mechanism that we plan to investigate is an adaptive and probabilistic traffic shaper employed at the video servers. We will also investigate the interactions between the proposed shaping mechanism and various existing or proposed rate adaptation algorithms at the client.
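
A minimal sketch of what a server-side probabilistic shaper might look like is given below: each session is capped slightly above its current bitrate, and the cap is applied only with some probability so that players do not all observe the same available bandwidth and oscillate in lockstep. The cap rule, headroom, and probability are assumptions for illustration, not the mechanism the authors plan to investigate.

```python
import random

# Sketch of a server-side probabilistic shaper for adaptive streaming.
# Illustrative assumptions only; the proposal's mechanism may differ.

def shaped_rate(bitrate, link_rate, headroom=1.2, shape_prob=0.8,
                rng=random.random):
    if rng() < shape_prob:
        # Cap near the session's current bitrate (plus headroom) so the
        # player measures a stable throughput and settles on a bitrate.
        return min(headroom * bitrate, link_rate)
    # Occasionally leave the session unshaped so it can probe upward.
    return link_rate

# Per-chunk shaping decisions for a 2 Mbit/s session on a 10 Mbit/s link.
rng = random.Random(42).random  # seeded for a reproducible trace
rates = [shaped_rate(2.0, 10.0, rng=rng) for _ in range(8)]
print(rates)
```

The randomness is the essential ingredient: deterministic shaping would leave all players synchronized, which is exactly the oscillation scenario described above.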

Weidong Xiang, University of Michigan, Dearborn
Prototype for Wireless Access in Vehicular Environments (WAVE) Systems

Wireless access in vehicular environments (WAVE) is the state-of-the-art dedicated short-range communications (DSRC) technology, proposed to provide vehicles with high-speed vehicle-to-vehicle and vehicle-to-roadside wireless links. WAVE systems employ an advanced orthogonal frequency-division multiplexing (OFDM) modulation scheme and can achieve data rates of 6-27 Mbit/s. WAVE systems operate in the 5.850-5.925 GHz band assigned by the Federal Communications Commission (FCC), and a roadside unit (RSU) can cover a range of up to 1000 feet. There are three main types of applications for WAVE systems: safety enhancement, traffic management, and communications. Examples include adaptive traffic light control, traffic jam alarms, collision avoidance, toll collection, vehicle safety services, travel information, and commercial transactions. WAVE systems not only facilitate the implementation of intelligent transportation systems (ITS) but also provide vehicles with Internet access for information transmission. The Department of Transportation (DOT) plans to install a large number of RSUs on the main roads and highways of the United States to make WAVE services available. WAVE systems have emerged as a complement to, as well as a competitor of, existing cellular mobile communications systems by offering high-speed wireless access with small latencies. ABI, a technology market research firm, has forecast a huge WAVE manufacturing and service market in the near future. WAVE creates a fresh, vehicle-based information technology industry and will be deployed worldwide. The impacts of WAVE systems on the economy, society, and our daily life will be substantial, multi-layered, and profound. This proposal seeks support from Cisco Research to complete, within one year, a prototype WAVE system consisting of four onboard units (OBUs) and two RSUs.
The objective of this proposal is to develop and realize a fully functional, real-time wireless access in vehicular environments (WAVE) prototype based on the IEEE 802.11p standard. Upon completion, the prototype will be among the first functional WAVE prototypes from either academia or industry. With the help of the prototype, the PI will present 1) a 5.9 GHz mobile channel model based on extensive experiments, which will provide a solid foundation for the theoretical study and engineering of WAVE systems; and 2) a WAVE system simulator, which will significantly benefit the research and development of WAVE systems. Such prototypes can be used as WAVE development kits for research and development, as OBUs/RSUs for building up a WAVE testbed, and as reference models for WAVE application-specific integrated circuit (ASIC) chip design.

João Barros, Instituto de Telecomunicações, Universidade do Porto
PredictEV – Mobile Sensing and Reliable Prediction of Electric Vehicle Charging in Smart Grids

The PredictEV project will deliver key technical advancements and a proof of concept for smart grid sub-systems capable of exploiting the real-time data provided by ICT-enabled electric vehicles (EVs) and city-scale mobile sensing to predict where, when, and how charging will take place in urban power systems. The project shall leverage a city-scale mobile sensor network supported by an IEEE 802.11p- and IPv6-compliant vehicular ad-hoc network with 500 GPS-enabled nodes that is currently under deployment in Porto, Portugal. This urban scanner is expected to deliver several gigabytes of data per day, which will be processed by a cloud computing facility to support low-carbon mobility and advanced grid management services. The focus of this project is on the distributed prediction and dissemination modules required for the efficient integration of EVs and charging stations as part of a complete smart grid solution embedded in the urban fabric. Using specific data sharing protocols to be developed in this project, each EV will be able to communicate through the existing vehicular network and keep the grid infrastructure informed about its position, battery status, and other relevant parameters. Our modules shall fuse this data with traffic values from the urban scanner and a priori information on EV mobility patterns to deliver distributed, precise, and localized predictions of the electricity demand at various points of the grid at any given moment in time. Such predictions can be used to manage distribution grids while enabling the recommendation of strategic charging opportunities for individual EVs based on real-time information.
The project shall exploit the synergies of two teams hosted by the University of Porto, the Power Systems Group at INESC Porto and the Networking and Information Security Group at Instituto de Telecomunicações - Porto, which together have extensive expertise in vehicle-to-grid systems, distribution networks, intelligent transportation systems, communication protocols, security, and information theory, among other areas. Both teams are also involved in and have access to the Portuguese electric mobility platform mobi.e, which plans to have 1350 charging stations deployed by the end of 2011. Close cooperation is also ensured with Living PlanIT, a company currently planning to construct a smart city called PlanIT Valley in the close vicinity of Porto. We believe PredictEV will contribute to a smarter electric mobility infrastructure in which utilities and city managers find it simpler to predict EV demand and mobile customers enjoy reliable charging services that are readily available at affordable cost.

Grenville Armitage, Swinburne University of Technology
Using delay-gradient TCP to reduce the impact of "bufferbloat" and lossy wireless paths

Traditional loss-based TCP congestion control has two significant problems: it performs poorly across paths with non-congestion-related packet losses, and it relies on filling and overflowing network buffers (queues) to trigger congestion feedback. Delay-based TCP algorithms utilise trends in RTT estimates to infer the onset of congestion along an end-to-end path. Such algorithms can optimise their transmission rates without inducing packet losses, and they keep induced queuing delays low. Delay-based TCP is thus attractive where a variety of loss-tolerant and loss-sensitive traffic flows over a mix of fixed and wireless link layer technologies. We believe practical delay-based TCP algorithms can help work around the emerging problem of excessively large queues in deployed infrastructure ("bufferbloat"). However, past delay-based TCPs have been overly sensitive to packet loss and insufficiently aggressive in the presence of loss-based TCP flows; enabling delay-based TCPs to coexist with loss-based TCPs is a significant challenge to real-world deployment. Our previous work on delay-threshold and delay-gradient TCP congestion control has shown improved loss tolerance and goodput in single-hop, intrinsically lossy link environments. We propose to evaluate more complex multi-hop, multiple-bottleneck scenarios (particularly when exposed to interfering cross-traffic using the NewReno, CUBIC, and CompoundTCP algorithms), to investigate the auto-tuning of relevant parameters and thresholds, and to investigate the possibility of a hybrid delay-threshold and delay-gradient mechanism. Our algorithms will be publicly released as BSD-licensed patches to the FreeBSD networking stack, intended for real-world deployment.
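
The delay-gradient idea can be sketched in a few lines: the sender smooths the per-RTT change in its RTT measurements and backs off probabilistically when the gradient is positive, i.e., before any loss occurs. The window size, scale constant, and backoff law below are illustrative assumptions, not the team's published algorithm.

```python
import math

# Simplified sketch of a delay-gradient congestion signal: a positive
# smoothed gradient of the per-RTT minimum RTT suggests queues are
# growing, so the sender backs off with some probability rather than
# waiting for packet loss. Constants here are illustrative assumptions.

def smoothed_gradient(rtt_mins, window=4):
    # Per-RTT differences of the measured minimum RTT, averaged over a
    # short window to suppress measurement noise.
    grads = [b - a for a, b in zip(rtt_mins, rtt_mins[1:])]
    recent = grads[-window:]
    return sum(recent) / len(recent)

def backoff_probability(gradient, g_scale=3.0):
    # No backoff when delay is flat or falling; otherwise a probability
    # that saturates exponentially in the gradient (ms per RTT).
    if gradient <= 0:
        return 0.0
    return 1.0 - math.exp(-gradient / g_scale)

rising = [10, 10, 11, 13, 16, 20]   # ms: a queue steadily filling
flat = [10, 10, 10, 10, 10, 10]     # ms: path idle or lossy but uncongested
print(backoff_probability(smoothed_gradient(rising)),
      backoff_probability(smoothed_gradient(flat)))
```

Note how the flat trace produces zero backoff even though a wireless path might be dropping packets: reacting to the delay gradient rather than to loss is what makes the approach attractive on lossy links.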

Wei Mao, Computer Network Information Center, Chinese Academy of Sciences
Integrated IPSec Security Association Management with DHCP

IPSec [RFC4301] is pervasive in Customer Premises Networks (CPNs), where it secures communication between a user host and a service provider of the CPN, such as a DNS recursive name server. An IPSec Security Association (SA) is a "connection" that provides security services to the traffic it carries and specifies the key and algorithm on which the IPSec connection relies. An SA can be established through manual configuration by the administrator. However, an SA is IP-address oriented and cannot function after the involved IP addresses change, which means extra SA parameter configuration is inevitable after renumbering the network or when a host roams. IKEv2 [RFC5996] eliminates manual reconfiguration. It employs two rounds of message exchanges; the first exchange establishes a trust relationship between the two IKEv2 participants using one of two authentication methods, digital signatures or pre-shared keys. The former requires certificate management: if a roaming host is not configured with the trust anchor that the CPN uses, certificate-based IKEv2 cannot be realized. The latter still relies on a preconfigured trust relationship, and pre-shared keys are tied to particular IP addresses; the pre-shared key method cannot be used with mobile systems or systems that may be renumbered, unless the renumbering stays within a previously determined range of IP addresses. Also, when pre-shared keys are employed, IKEv2 computations cannot be offloaded to attached hardware. The pre-shared key method therefore fails to be fully automatic and does not cope well with dynamic networking. DHCP [RFC2131 & RFC3315] allows a computer to be configured automatically, eliminating the need for intervention by a network administrator.
Since DHCP is responsible for configuring the IP address and an IPSec SA is IP-address oriented, we propose to extend DHCP to integrate the IPSec SA establishment procedure into the DHCP message exchange, reinforced by other interactions, in order to dynamically configure the SA parameters shared between the user host and the service provider of the Customer Premises Network.

Yuzhuo Fu, Shanghai Jiao Tong University
Virtualization for NoC fault tolerance

Advances in IC technology are driving the Network-on-Chip (NoC) to dominate many-core communication. Another effect of shrinking feature sizes is that supply voltages and device threshold voltages (Vt) decrease, and wires become unreliable because they are increasingly susceptible to noise sources such as crosstalk, coupling noise, soft errors, and process variation. Therefore, NoC fault tolerance, especially against soft errors, becomes a challenge for VLSI designers. In this proposal, we introduce virtualization technology from a new viewpoint to achieve three targets: 1) meet different fault tolerance requirements through hardware/software cooperation on a common hardware platform; 2) hide error detection, masking, and recovery for effective error management; and 3) reduce the performance loss and overhead of virtualization through hardware assistance. Toward these goals, a complete FTVL (Fault Tolerance Virtualization Layer) is planned as reliable middleware for the NoC.

Sachin Katti, Stanford University
Full Duplex Wireless

We propose to research full duplex wireless, a novel technology with which radios can simultaneously receive and transmit on a single band. Our earliest prototypes required 3 antennas. Current ones require 2. The proposed research has three major topics: developing a single antenna design (and thereby enabling full duplex MIMO), algorithms for adaptively tuning the full duplex circuits, and using full duplex radios to build easy-to-design WiFi-direct devices and wireless relays.

Paulo S. L. M. Barreto, Escola Politecnica, University of Sao Paulo
Efficient Code-Based Cryptosystems (RFP-2011-050 Alternate Public Key Cryptosystems)

The two most widely used public key cryptosystems, RSA and ECC, can be easily broken by an attacker in possession of sufficiently large quantum computers, bringing about the need to look for other, quantum-resistant solutions. The so-called post-quantum cryptosystems are most often based on syndrome decoding, lattices, or multivariate intractability assumptions. Among them, the McEliece and Niederreiter cryptosystems, both based on syndrome decoding problems, are two strong candidates to replace the current ones. One of the reasons these systems have not been extensively deployed is the substantial key size needed to ensure reasonable security levels. A careful choice of parameters makes these cryptosystems more attractive in practice. In particular, the use of quasi-dyadic (QD) Goppa codes reduces the key length by roughly two orders of magnitude compared to generic Goppa codes and improves encryption/decryption efficiency from O(n^2) to O(n log n) for codes of length n. Families of codes less sensitive to structural attacks, like LDPC codes, are potential alternatives to QD Goppa codes and raise the possibility of combining the best properties of both approaches in a possible family of QD-LDPC codes. The goal of this proposal is to achieve functional implementations of the Niederreiter cryptosystem (potentially also McEliece, although it is less efficient) in both software and hardware, with the highest possible throughput and lowest possible per-message bandwidth overhead for standard security levels (matching the NIST recommendations for symmetric encryption, namely between 2^112 and 2^256). To this end, our approach will divide the problem into three lines of research: optimizing the parameters of QD Goppa codes, efficiently implementing the algorithms in software and hardware, and exploring the properties of QD-LDPC codes for cryptographic purposes.
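
The key-size reduction mentioned above comes from the structure of dyadic matrices, which are fully determined by their first row (the "signature"): entry (i, j) equals signature element i XOR j. A minimal sketch of that expansion:

```python
# A dyadic matrix is fully determined by its first row ("signature"):
# entry (i, j) equals h[i XOR j]. Storing the signature instead of the
# full n x n matrix is what shrinks quasi-dyadic keys. Minimal sketch;
# real QD Goppa keys are block matrices of dyadic blocks over GF(2^m).

def dyadic(h):
    n = len(h)
    assert n & (n - 1) == 0, "n must be a power of two"
    return [[h[i ^ j] for j in range(n)] for i in range(n)]

sig = [1, 0, 1, 1]   # 4 stored entries ...
m = dyadic(sig)      # ... expand to the full 4 x 4 matrix on demand
for row in m:
    print(row)
# Storage: len(sig) entries instead of len(sig) ** 2.
```

Because the matrix never needs to be stored explicitly, the public key scales linearly rather than quadratically with the block size, which is the source of the roughly hundredfold reduction cited for QD Goppa codes.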

Li Chen, University of Saskatchewan
Reliability Assessment for Custom-Designed 3D-IC Test Chips

3D IC is a promising technology, but many aspects of it are still under heavy investigation. The reliability of TSV interconnects and interlayer bondings between silicon layers is a more complicated issue in 3D ICs due to the complexity of the architecture and the miniaturized interconnect. Because of the large difference in the Coefficient of Thermal Expansion (CTE) between Cu and Si, the Cu TSV induces stress on its surroundings, potentially leading to reliability problems. The objectives of this proposal are to develop custom-designed 3D-IC test chips and then use them as test vehicles to study overall 3D package-level performance in terms of reliability and testability. The 3D IC technology will be provided by Tezzaron through CMC and MOSIS. The custom-designed test chip will include various test structures, such as strain sensors, temperature sensors, and test circuits for electrical parameter evaluation. We will assess the electrical performance of the 3D-IC structures, including ESD, speed, noise coupling, and power consumption. We will also evaluate reliability issues in 3D-IC structures using x-ray microdiffraction and Raman experimental methods. The results of the project can directly benefit the academic and industry communities by clarifying uncertainties in 3D IC technologies and providing guidelines for 3D IC implementation. It will not only demonstrate Cisco's visibility in this area but also provide financial support to our university research group.

Grenville Armitage, Swinburne University of Technology
Multipath TCP for FreeBSD

This project has two major goals: (a) to create an initial BSD-licensed, public implementation of Multipath TCP in FreeBSD, and (b) to utilise this implementation to explore the potential benefits of adding delay-based congestion control (CC) techniques to manage MPTCP subflows. The latter will include exploring the possibility that mixing loss-based and delay-based CC on MPTCP subflows within the same MPTCP connection might create a more robust ability to spread load across heterogeneous parallel paths.

Salvatore Pontarelli, University of Rome "Tor Vergata"
Bloom Filters for Error Detection and Correction in Binary/Ternary CAM

The aim of this proposal is to develop a system-level approach for the detection and correction of errors induced by Single Event Upsets (SEUs) in Binary and Ternary Content Addressable Memories (CAM/TCAM). Our approach combines Bloom filters (BFs) and error detection codes to achieve a per-access error detection procedure. One of the major drawbacks of BFs is the presence of false positives, which signal the presence of an item even when the item is not in the filter. An error in a CAM/TCAM acts in the opposite way from a false positive: when an error corrupts a certain word in the CAM/TCAM, the memory responds with a false negative if that word is queried. The underlying idea of this research is that the CAM/TCAM and the BF check each other: false positives of the BF are detected with the CAM/TCAM result, and SEUs occurring in the CAM/TCAM are detected with the BF result. For a binary CAM, comparing its output with a single BF could be sufficient. For a TCAM, instead, we use a set of Bloom filters arranged similarly to schemes previously proposed in the literature. Each BF in the set is programmed with only part of the entire word to be matched, and a priority-like policy selects the output to be compared with the TCAM output. Since the comparison with the TCAM detects false positives in the set of BFs, the scheme we propose is simpler than other previously proposed schemes; in particular, higher false positive rates can be tolerated, which relaxes the requirements in terms of memory resources and number of memory accesses. Summarizing, the proposed approach does not require any modification to the internal structure of the TCAM, requires limited overhead in terms of power and logic resources, and enables error detection times significantly lower than those achievable with scrubbing-based methods.
Data provided by Cisco on the size and operating frequency of the CAM/TCAMs in use, together with application scenarios, will be used to study realistic use cases.
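The mutual-checking idea can be illustrated with a minimal sketch. This is not the proposed scheme itself: the Bloom filter parameters, the set standing in for the CAM, and the route names are all invented for illustration.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over byte strings (parameters are illustrative)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

# A set stands in for the CAM; both structures are programmed with the
# same words, then an SEU silently corrupts one CAM entry.
stored = {b"route-A", b"route-B"}
bf = BloomFilter()
for word in stored:
    bf.add(word)
cam = set(stored)
cam.discard(b"route-A")  # simulated SEU: the CAM now misses on route-A

def lookup(word):
    in_cam, in_bf = word in cam, word in bf
    if in_bf and not in_cam:
        # Either a BF false positive or a CAM word corrupted by an SEU.
        return "mismatch"
    # A CAM hit with a BF miss cannot happen error-free, since BFs
    # have no false negatives; that case would also flag an error.
    return "agree" if in_cam == in_bf else "error"

assert lookup(b"route-A") == "mismatch"  # the SEU is caught on access
assert lookup(b"route-B") == "agree"
```

A per-access check like `lookup` is what lets errors be caught as soon as a corrupted word is queried, rather than waiting for a periodic scrubbing pass.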

Mahesh Marina, The University of Edinburgh
Using Smartphones for Understanding Wireless Coverage and Interference

Propagation and interference characteristics of wireless networks (e.g., WiFi and 3G) significantly affect user-perceived performance, but are hard to predict a priori, especially in indoor and dense urban environments. Fine-grained spatio-temporal monitoring of such networks will provide insight into the coverage and interference patterns that can in turn guide their online optimization. We propose a smartphone-based wireless monitoring system that views the various network interfaces on a smartphone as sensors and exploits the inherent mobility of users carrying smartphones for low-cost yet detailed monitoring. This work will address the various research issues that arise, including localization, time synchronization, adaptive sampling and upload rates, self-calibration, and adaptive channel hopping.

Ian Lane, Carnegie Mellon University
A Hybrid Cloud-based GPU-CPU Engine for Real-Time Large Vocabulary Continuous Speech Recognition

Highly accurate real-time Large Vocabulary Continuous Speech Recognition (LVCSR) remains a significant challenge for speech recognition research. While high accuracy can be obtained using multi-pass offline Automatic Speech Recognition (ASR) systems, performance degrades significantly when low-latency real-time operation is required. To realize real-time performance, sacrifices that significantly affect recognition accuracy are made, including the use of smaller vocabularies, smaller and less accurate language and acoustic models during decoding, and limited speaker adaptation. In this work we propose to develop the technology required to bridge this gap, bringing the performance of state-of-the-art offline speech recognition to real-time systems. The resulting technology will be the foundation for a new generation of speech applications, including natural, user- and task-adaptive voice user interfaces for smart devices (mobile phones, televisions, and vehicles), real-time subtitling for videos and video conferencing, real-time transcription of teleconference calls, and supportive personalized agents that monitor and support conversations as they happen.

Elyse Rosenbaum, University of Illinois at Urbana-Champaign
ESD Reliability of 3D-IC

The I/O circuits internal to a 3D-IC have reduced loading due to the elimination of package-related parasitics; this provides higher I/O bandwidth and improved signal integrity. Although 3D integration carries performance and power benefits, it can also introduce new reliability hazards. This project will address the ESD hazards in 3D-IC through the following tasks. 1. Investigate requirements for ESD protection at inter-die signal interfaces with a three-fold objective: (i) minimize yield loss due to ESD during die-to-die bonding; (ii) pass component-level CDM qualification; (iii) maximize the bandwidth of this interface. 2. Demonstrate ESD solutions for 3D-IC. 3. Further build the CDM-ESD simulation capability for 3D-IC.

Alan C Bovik, University of Texas
Objective Measurement of Stereoscopic Visual Quality II

There is a recent and ongoing explosion in 3D stereoscopic displays, including cinema, TV, gaming, and mobile video. The ability to estimate the perceptual quality of the delivered 3D stereoscopic experience is quite important to ensure high Quality-of-Service (QoS). Towards this end, we propose to develop principled metrics for 3D stereoscopic image quality, create quantitative models of the effects of distortions on the perception of 3D image quality, and develop Stereo Image Quality Assessment (IQA) algorithms that can accurately predict the subjective experience of stereoscopic visual quality. To achieve this we will conduct psychometric studies on a recently constructed database of high-resolution stereo images and distorted versions of these images, for which we have precision lidar ground truth. Based on these studies, which will reveal perceptual masking principles involving depth, we will develop methods for estimating the degradation in quality experienced when viewing distorted stereoscopic images. By combining quantitative 3D perception models with new statistical models of naturalistic 3D scenes that we have discovered under an NSF grant, including the bivariate statistics of luminance and depth in real-world 3D scenes, we shall create a new breed of perceptually relevant, statistically robust Stereo IQA algorithms. We envision that our Stereo IQA algorithms will achieve a much higher degree of predictive capability than any existing methods. More importantly, we will develop Stereo IQA algorithms that are blind, or No Reference, which to date remains a completely unsolved problem. In Year 2, we are furthering our work with a unique series of psychometric studies of human perception of distortions in 3D, and in particular of 3D masking principles in distorted stereoscopic signals. Pursuant to these developments, we are developing video-aware cooperation strategies for stereoscopic video streaming in heterogeneous networks.

Gengsheng Chen, Fudan University
System Level Vulnerability Factor and Logic Fault Sensitivity of Embedded CPU on FPGA

In this proposal, we propose to study the system-level vulnerability factor and logic fault sensitivity of an embedded microprocessor (CPU) implemented on an FPGA. We will map an open-source CPU to a Xilinx-like FPGA developed internally by our group. While Xilinx does not (and probably never will) provide an API to inject faults into a specific C-bit for logic and interconnect, a fault can be injected into the exact C-bit of our FPGA. Therefore, we can (1) obtain the exact system-level vulnerability factor and logic-level fault sensitivity for a given C-bit and workload; (2) evaluate the effectiveness of logic- and system-level enhancement methods for soft error tolerance by applying them to the targeted CPU; and (3) develop new CPU IP blocks that apply logic enhancement and are more soft-error tolerant. The above (1)-(3) will be the primary deliverables of the proposed research. To validate the practical applicability of the proposed research, the target CPU (before and after soft-error tolerance enhancement) will be mapped to a Xilinx FPGA and tested for tolerance of soft errors using the existing (random-based) fault injection mechanisms. This validation is the last part of the deliverables of the proposed research. On the whole, our proposed research aims to benefit not only Cisco's related products but also other industrial and academic research efforts that use FPGAs for their circuit and system implementations.

Li Chen, University of Saskatchewan
SET Study in a DC-DC Converter Chip

This proposal is to study single event transient (SET) effects in a DC-DC converter chip. The device was identified as less sensitive to alpha particles than another device from the same manufacturer, which failed qualification tests. Cisco believed that studying the SET performance using pulsed laser, schematic, and 3D-TCAD simulations could be beneficial to industry. In order to make industry aware of the issue, understand the underlying physics and mechanisms, and hence improve future designs, they decided to follow up by moving it into fundamental research with my university group. Our research group has laser measurement equipment, circuit-level simulation, and 3D-TCAD simulation capabilities, and, most importantly, established expertise in this area. We would like to combine the pulsed laser and simulation tools to conduct the research. We expect to publish several papers in the next 1-2 years. This will demonstrate Cisco's visibility in this area and also provide my research group with content and funding for graduate students.

Dawn Song, University of California, Berkeley 
New Techniques for Detecting Zero-Day Exploits and APTs

Enterprises are increasingly challenged by Advanced Persistent Threats (APTs) attempting to infiltrate their networks by any means possible. In contrast to previous generations of threats, such attackers often custom-build attacks for a single target, so we need defensive mechanisms that work for previously unknown vulnerabilities (zero-day attacks) and cannot be easily evaded. At the same time, an attack detection system must have low overhead and be easy to deploy in production or offline analysis systems. Drawing on our deep experience with binary analysis, we propose to address this challenge using static binary rewriting to add dynamic attack detection to an off-the-shelf executable. The rewritten program will detect exploits on previously unknown vulnerabilities. By performing the rewriting ahead of time and using a state-of-the-art binary analysis framework, we can precisely target the dynamic checks as well as reduce runtime overheads. The rewritten binary can then be used in cloud-based automatic analysis systems or even in certain production systems to detect zero-day exploits and APTs.

Jun Fan, Missouri University of Science and Technology
Measurement-Calibrated Modeling and Early-Stage Design Exploration for 3D ICs

Compared to single-chip integration, 3D integration and 2.5D integration using through-silicon vias (TSVs) and silicon interposers offer shorter interconnects, a smaller footprint, and heterogeneous integration at lower cost. Accurate modeling of TSVs and interposers is critical for signal/power/thermal integrity in 3D or 2.5D integration. In this proposed work we will develop measurement-calibrated, compact equivalent circuit models for TSVs and interposers, especially for TSV arrays and for TSVs used in high-speed signaling, together with accurate early-stage exploration of (i) die-to-die (D2D) integration and signaling schemes and (ii) physical design rules prior to any specific physical design.

Raj Jain, Washington University in Saint Louis
openSDN: Open Service Delivery Network Architecture

Cloud computing provides unique opportunities for ASPs to manage and optimize their distributed computing resources. We propose to develop an open service delivery network architecture (openSDN) which can be used by networking equipment companies, like Cisco, to develop openSDN routers that allow ASPs to customize their own networks or the ISP networks connecting multiple cloud computing facilities belonging to multiple cloud providers, enterprise networks, and data centers. It provides a novel network configuration primitive called rule-based delegation that allows an ASP to configure the network without having to hand full control of application-level logic or data to third-party networking service providers. These delegation rules may include rules for how to select an instance among multiple replicas of a server, how to load-balance among different instances, what to do under network and instance failures (due to security attacks or hardware malfunction), etc. openSDN cleanly separates the control and data planes. The control plane provides a secure interface for the registration and distribution of specific rules and then translates these rules into forwarding rules for the data plane, which enforces them. All openSDN-aware entities have names, IDs, and locators that help maintain multiple instances of services and allow services to move transparently inside a cloud or among many clouds. An innovative naming system is used to resolve entity names to IDs. Applications of openSDN include private WANs, datacenter networks, private/public/hybrid clouds, telecom services, online services, distributed application networks, science, educational and research computing, and defense networks. The proposed architecture is evolutionary in the sense that it can coexist and is backward compatible with the current Internet and can be deployed incrementally with a small number of new routers.
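As a rough sketch of what rule-based delegation could look like, the toy compiler below turns a declarative ASP rule into a forwarding entry without ever seeing application logic. The rule fields, service name, and addresses are all hypothetical, not part of the openSDN design.

```python
# Hypothetical delegation rule handed to the provider by an ASP: pick the
# nearest replica of the "web" service, fail over to the next one on failure.
rules = [{"service": "web", "select": "nearest", "on_failure": "failover"}]

# Replicas known to the control plane, as (address, distance) pairs.
replicas = {"web": [("10.1.0.2", 5), ("10.2.0.2", 12)]}

def compile_rules(rules, replicas):
    """Translate delegation rules into data-plane forwarding entries."""
    fib = {}
    for rule in rules:
        # Order candidate instances by distance from this router.
        instances = sorted(replicas[rule["service"]], key=lambda r: r[1])
        if rule["select"] == "nearest":
            primary = instances[0][0]
            backup = (instances[1][0]
                      if rule["on_failure"] == "failover" and len(instances) > 1
                      else None)
            fib[rule["service"]] = {"next_hop": primary, "backup": backup}
    return fib

print(compile_rules(rules, replicas))
# {'web': {'next_hop': '10.1.0.2', 'backup': '10.2.0.2'}}
```

The point of the separation is visible here: the ASP expresses intent ("nearest", "failover"), while the provider's control plane owns the topology data and emits concrete forwarding state.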

Omprakash Gnawali, University of Houston
RPL’s Performance Under Dynamics at Different Time Scales

RPL is an IPv6 protocol that runs at the edge of the smart-grid infrastructure to interact with end-user devices. In this project, we propose to study the performance of RPL under wireless dynamics. Specifically, we seek to answer these questions: Under what wireless network dynamics will RPL perform optimally? What networking conditions will challenge the performance of RPL? What configurations and supporting mechanisms will enable RPL to perform optimally in a wide range of environments?

Arun Venkataramani, University of Massachusetts Amherst
P3: Mobile Content Distribution in a Multi-Technology Wireless World

This proposal is motivated by the tension between the explosive growth in mobile data usage and capacity limitations imposed by spectrum availability. Our position is that there will be an increased dependence on multiple wireless access technologies such as cellular as well as shorter-range technologies such as WiFi. Although these heterogeneous access technologies as well as requirements of mobile applications bring forth unique challenges, today's network and transport protocols largely continue to be a legacy of the wired Internet. To address this disparity, we propose to conduct research on the design and implementation of P3, a mobile content delivery system that seeks to jointly optimize performance, price, and power consumption (the three P's) characteristics across multiple access technologies so as to enhance user-perceived quality of mobile applications. The proposed work is informed by our recent measurement studies in vehicular settings comparing 3G and WiFi with respect to the three P's. We plan to extend the measurement studies to 4G settings and, based on these findings, design and implement content delivery strategies that significantly improve the user-perceived mobile application experience.

Liang Cai, Zhejiang University
Research Proposal of Testing Methodology for Cloud Based Application Sharing Services

While evaluating the Quality of Experience (QoE) of application sharing services under different network conditions is a basic and challenging problem, little effort or progress has been made by academia and industry. We therefore propose an approach to evaluate the quality of application sharing services. The approach consists of a novel rating model and an objective evaluation method for application sharing QoE, based on knowledge of the human visual system (HVS) and taking human perceptual reactions into account. Our proposed approach will take WebEx QoE evaluation as a case study. This method will dramatically improve the efficiency of QoE evaluations.

Amin Vahdat, University of California, San Diego
A Virtualized Service Infrastructure for Global Data Delivery

This project will investigate novel models for delivering application functionality and virtualized storage across an ensemble of cloud data centers spread across the Internet. The work proposes naming data objects as the fundamental mechanism for naming resources, rather than the traditional approach of employing IP addresses. On top of this base abstraction, we will investigate two interlocking issues. First, we will investigate various consistency models for the underlying data. Given the limited reliability of individual Internet paths, we argue that data replication across multiple data centers is fundamental to achieving high levels of availability (five nines). In the face of such replication, a critical question is the model---and the required network support---for delivering a range of consistency semantics to applications and clients making use of the data. Second, we will investigate APIs and implementation techniques for layering a range of data structures on top of the underlying object store. For instance, we will explore replicated, consistent queues, trees, hash tables, graphs, etc. as the fundamental building blocks for application construction and communication.

Rolf Stadler, KTH Royal Institute of Technology
Network Search for Network Management

Our goal is to engineer a network search functionality that can be understood as "googling network data in real-time." For instance, the search query "find information about the flow <src, dest, app>" would return a list of systems that carry the flow in their caches, the interfaces the flow uses, the list of firewalls the flow traverses, the list of systems that impose traffic policies on this flow, etc., and possibly even the flow's current QoS metrics. Network search is complementary to monitoring, in that state information and monitoring variables are not explicitly addressed by network location but characterized as content. The idea is that the search will be performed on static and dynamic data, on current as well as historical information. To achieve scalability and fast completion times for search on operational data, network search has to be performed "inside the network," supported by a decentralized, self-organizing architecture. We envision that network search will enable novel and powerful network management functions. First, it will spur the development of new tools that assist human administrators in network supervision, maintenance, and fault diagnosis. Second, we believe that a new class of network control and service functions will emerge that use a network search engine.
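The content-addressed flavor of such a query can be sketched in a few lines. This toy centralizes the state in one dictionary purely for illustration (the device names and state layout are invented); the proposal's point is precisely that the real search must run decentralized, inside the network.

```python
# Toy "network search": device state is matched by content (which flow it
# mentions), not addressed by network location.
flow = ("10.0.0.1", "10.0.1.5", "http")

devices = {
    "edge-rtr-1": {"role": "router",   "flows": {flow}},
    "fw-2":       {"role": "firewall", "flows": {flow}},
    "core-rtr-3": {"role": "router",   "flows": set()},
}

def search_flow(flow):
    """Return every system whose cached state mentions the flow."""
    return sorted(name for name, state in devices.items()
                  if flow in state["flows"])

print(search_flow(flow))  # ['edge-rtr-1', 'fw-2']
```

A real query would additionally pull per-device detail (interfaces, policies, QoS metrics) from each match, and the matching itself would be pushed out to the devices rather than iterated centrally.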

Larry Smarr, University of California, San Diego
Integration of Cisco Telepresence with OptIPortals

Cisco TelePresence provides corporations with a complete, high-quality, high-definition video teleconferencing system. It consists of one to three high-definition display panels, with an additional screen for PowerPoint slides, uses excellent proprietary codec technology, and runs over telecom-provided dedicated DS3 circuits. Cisco TelePresence, however, does not provide academics or government and corporate decision makers with combined video teleconferencing and advanced data displays that cover entire walls, that provide advanced interactive capabilities, and that run over the National LambdaRail (NLR). We plan to integrate Cisco TelePresence with the NSF-funded OptIPuter software and display technology, to prototype a system for digital collaboration among researchers in universities, government, and industry that can be rapidly expanded to data-intensive researchers at campuses connected to NLR. A key motivation for this initiative is to provide a prototype oriented to requirements that must be met in the next few years. The current TelePresence model is based on common services and technology readily available today at almost all fairly well connected locations. The OptIPuter reflects emerging advanced capabilities that can be implemented at selected sites using specialized technologies and configurations. This proposed project will demonstrate both how OptIPuter techniques can significantly enhance the current TelePresence environment and how this environment can be extended to meet emerging future requirements -- reflecting capabilities anticipated in the next few years. Such capabilities will include much higher capacity, performance, and quality of communication services.

Kenneth E. Goodson, Stanford University
Power and Thermal Engineering of Microprocessors & ASICS in Router Systems

Router and switching systems face mounting challenges relating to power consumption, noise, and reliability. Power consumption comprises the delivery of high power to drive devices and the removal of an almost equal amount of heat. Much progress can be made by leveraging the improved thermal management technologies developed for the electronics industry over the past decade, which may enable significant reductions in fan power consumption and noise at projected chip power generation levels. This work quantifies these potential benefits by developing electrothermal simulations of chips and chip networks and by helping Cisco develop a systematic chip temperature measurement approach. In a parallel effort, this work will examine the power distribution network in Cisco network chips and study techniques to modulate and optimize the delivery of energy. The proposed work will be a joint collaboration between the Mechanical Engineering and Electrical Engineering Departments of Stanford University.

Chava Vijaya Saradhi, Center of REsearch And Telecommunication Experimentation for NETworked communities (Create-Net)
Dynamic Mesh Protection/Restoration (Protectoration) in Impairment and 3R Regenerator Aware GMPLS-Based WDM Optical Networks

Recent evidence from Sprint [1] suggests that multiple simultaneous component failures (or frequent fiber cuts due to construction activity in emerging countries, with varying repair times) in IPoDWDM networks are more common than expected; in fact, as many as 6 simultaneous component failures were detected. Given the growing network size and the increasing economic impact of failures on meeting pre-agreed, stringent SLAs, network operators cannot rely on simple single-link failure models. Hence, availability/reliability analysis based on multiple-failure models has started to attract significant attention. In addition, dynamic protection/restoration algorithms need to consider the constraints of the various optical component technologies deployed in transport backbone networks (e.g., fixed OADMs, 2-degree ROADMs, N-degree ROADMs, Y-cables, protection switching modules [PSMs]) [2], constraints on optical reach due to physical layer impairments (PLIs) [3, 4], switching constraints due to fixed or colorless and omni-directional components, and the availability of regenerators [5, 6] and tunable transponders [2]; without these, it is impossible to realize dynamically reconfigurable and restorable translucent optical networks. Most of the work in the literature assumes ideal and homogeneous physical-layer components/links and characteristics, and hence these algorithms cannot be applied directly to production networks. In this project, we plan to develop novel dynamic single/multiple link/node-failure recovery techniques based on the coordination of management and control planes in the context of GMPLS-based WDM optical networks, considering the availability of different optical components, including 3R regenerators and tunable transponders, and their constraints, together with the limitations on optical reach due to PLIs, leveraging the work we have pursued and the experience we have gained in several related past projects [3-6].

Krishnendu Chakrabarty, Duke University
Fault-Isolation Technology for System Availability and Diagnostic Coverage Measurement

The proposed work is on optimized fault-insertion (FI) technology for understanding the impact of physical defects and intermittent/transient faults on chip, board, and system operation of electronic products. It includes theoretical research on optimization, statistical inference, self-learning theory, etc., for fault diagnosis, as well as evaluation using public-domain benchmarks. Research results will be disseminated through conference and journal publications.

Massimiliano Pala, Dartmouth College
Portable PKI System Interface for Internet Enabled Devices and Operating Systems

In a 2003 analysis of a PKI survey, OASIS identified the three most urgent obstacles to wide PKI adoption: the high cost of PKI setup and management, usability issues (at both the user and management levels), and the lack of support for PKI technology in applications. In particular, a major issue when developing applications is the absence of a standard, well-designed, and easy-to-use API for PKI operations. Subsequent work from OASIS in 2005 showed that not much had changed since 2003. Today, developing PKI-enabled applications is still difficult. When it comes to using digital certificates in network devices like routers or access points, integrating PKI functionalities is even harder because of limited capabilities or complex programming interfaces. Thus, providing a usable interface for the final user (i.e., the network administrator) is extremely difficult. This project will address PKI usability and complexity issues through the development of a standard API and an associated abstraction. More specifically, we will focus on the portability and usability of PKI applications. In order to achieve portability, we will standardize the API specification by writing a new Internet Draft. This public document will allow other developers, cryptographic library providers, or operating system vendors to implement our API across different systems (e.g., computers, network equipment, portable devices, sensors, etc.). Regarding usability, we will focus on making PKI operations easy for the developer. We will leverage our extensive experience (in both the academic and the "real" world) to raise the "usability bar" of PKIs and lower the software development costs associated with digital certificate usage. Project Objectives: We propose to study, design, and implement a highly usable API for PKI operations.
More specifically, we will investigate, develop, and standardize a new API specification that will enable developers to easily integrate and/or implement PKI functionalities in applications and operating systems. In contrast to existing all-purpose cryptographic libraries, our work will focus on providing an abstraction layer capable of integrating existing protocols (e.g., SCEP, PRQP) into simple high-level PKI-specific calls. In particular, we will simplify the interactions between End-Entities and Certification Authorities and the integration of authorization information processing into secure communication between peers. By analogy, this project will provide the same benefits in portability and usability as the POSIX interface provided for modern operating systems. Expected Outcomes: Our work will help lower the costs associated with the adoption of digital certificates and public key cryptography for authentication purposes, especially for Internet-connected devices. Moreover, this work will allow PKI adopters to focus more on the applications themselves, where security benefits and return on investment are easier to demonstrate.

Raymond Yeung, The Chinese University of Hong Kong
Two Problems in Network Coding

We propose to work on two problems in network coding. The first problem is a fundamental one that extends the theory of network coding from block codes and convolutional codes to variable-length codes. This extension has the potential of combining source coding and network coding for joint optimization. The second problem is to study the distributed scheduling and replication strategies of a peer-to-peer system for content distribution, including coding-based strategies. The results should be useful for content distribution, whether streaming or VoD, which is projected to be the predominant use of network bandwidth in the coming years.
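For background, the core idea both problems build on is visible in the classic butterfly-network example, sketched below: by XOR-combining two flows on the bottleneck edge, both sinks receive both source bits at once, which plain routing cannot achieve. This is only the textbook illustration, not the proposal's variable-length or peer-to-peer constructions.

```python
# Butterfly network: a source sends bits a and b to two sinks over
# unit-capacity edges. The shared bottleneck edge carries a XOR b,
# and each sink combines it with the bit it received directly.
def butterfly(a, b):
    coded = a ^ b            # the relay node codes the two flows together
    sink1 = (a, coded ^ a)   # sink 1 gets a directly, recovers b = (a^b)^a
    sink2 = (coded ^ b, b)   # sink 2 gets b directly, recovers a = (a^b)^b
    return sink1, sink2

# Both sinks decode the full pair (a, b) for every input.
assert butterfly(1, 0) == ((1, 0), (1, 0))
assert butterfly(1, 1) == ((1, 1), (1, 1))
```

Without the coded edge, the bottleneck could forward only one of the two bits per use, so one sink would always be left a bit short.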

Qiang Xu, The Chinese University of Hong Kong
Physical Defect Test for NTF with Minimized Test Efforts

No Trouble Found (NTF) devices are devices returned by system customers as having failed in the field but that refuse to fail when retested on automatic test equipment (ATE). This phenomenon is one of the most serious challenges for test engineers because it is very difficult, if not impossible, to identify the root cause of such devices' problems without being able to reproduce the failure. To tackle this problem, in this project we plan to investigate the fundamental reasons for the occurrence of the NTF phenomenon and develop novel layout-aware testing techniques that drive the circuit under test (CUT) into its worst-case operational states in its functional mode. By doing so, we can mimic the behavior of the CUT close to its functional operation on ATE, thus dramatically reducing the NTF rates for returned defective devices.

Jacob A. Abraham, The University of Texas at Austin
Mapping Component-Level Models to the System and Application Levels

Complexity is the primary stumbling block to the effective analysis of computer systems. Analyzing complex systems, including Systems-on-Chip (SoCs), for verification, test generation, or power consumption is made harder by the increasing complexity of integrated circuits. In addition, technology trends are increasing the complexity of the basic component models in a system. The result is that most tools can only deal with relatively small components in the design. This proposal is based on our novel techniques for mapping module-level constraints to the system and application levels. We have applied them to enable at-speed tests which target desired faults in a module (such as stuck-at faults or small delay defects), and which can be applied from the system-level inputs without relying on additional on-chip circuitry. Recent research has used these ideas to find functionally valid system-level vectors which produce peak power in an embedded module. These techniques have been applied to instruction-set processors, and will be extended to enable other targets, such as application-specific systems, to be analyzed effectively. Other problems addressed will include abstractions to manage the complexity of test generation, measures of coverage, and architectural support for the mapping process.

James Martin, Clemson University
Broadcasting Video Content in Dense 802.11g Sports and Entertainment Venues

Wireless communication is changing how humans live and interact.  Evidence of this evolution is the widespread deployment and usage of 802.11, 802.16, and 3G systems.  Multimodal devices such as the iPhone and the Droid are unifying 802.11 and 3G networks.  A particularly interesting application domain that demonstrates the rapid developments in wireless technology is sports and entertainment. Video, such as live broadcast video, is considered the application that most engages users (fans or patrons) in a sports and entertainment venue. Deploying this capability at a stadium to support hundreds to thousands of users is challenging.  A promising approach for robust delivery of video content to a large number of wireless stations is to multicast the data in conjunction with application-level forward error correction (AP-FEC) based on Raptor codes.  The proposed research will provide theoretical and simulation-based results that will help the academic community better understand the feasibility, limitations, challenges, and possible solutions surrounding large-scale deployment of video broadcast in extremely dense 802.11 deployments when using modern handheld devices and state-of-the-art application coding strategies.

Priya Narasimhan, Carnegie Mellon University
YinzCam: Large-Scale Wifi-Based Mobile Streaming in Sports Venues

YinzCam is a first-of-its-kind indoor wireless service that targets live video streaming to the wifi-enabled cellphones of a high-density, often-nomadic, large user population. Designed to enhance the fan's viewing experience at sporting events, YinzCam allows many (potentially thousands of) fans at the event to (1) select from, and view, high-quality live video feeds from different camera angles in real time, where the cameras are positioned at strategic points in the arena, e.g., the net-cam in ice hockey or the line-judge cam in football, (2) view action replays from the current event and previous/related events, and (3) participate, in real time, in trivia games or live polls during and after the event. The research challenges involve scalable, high-quality, real-time, interactive video and multimedia delivery over a wireless network within the arena, to a large number of handheld mobile devices or touch-screens placed throughout the arena. YinzCam has been successfully piloted since November 2008 within Mellon Arena, the home of the Pittsburgh Penguins, an NHL professional hockey team. Mellon Arena seats 17,000+ fans, and the Pittsburgh Penguins have offered the PI's research team (within Carnegie Mellon's new Mobility Research Center) the unprecedented opportunity of deploying the YinzCam technology live at Mellon Arena and offering it to fans throughout the 2008-09 and 2009-10 hockey seasons. YinzCam has already been tested at 60+ Penguins home games in 2009, with multiple phone models (Nokia, Samsung, Android, iPod/iPhone, Blackberry) and multiple live, broadcast-quality video-camera angles, and has received press coverage and support from the Pittsburgh Penguins. The purpose was to validate the technology sufficiently with real users so as to deploy YinzCam permanently at full scale in the Consol Energy Center, the new arena for the Pittsburgh Penguins as of Fall 2010.
The research collaboration with Cisco will allow us to jointly explore the issues underlying this large-scale, high-quality delivery of video, to gather data on the usage, performance, quality-of-service and reliability issues of such a large-scale deployment, and to explore how YinzCam can be extended with location-aware and social networking services to further heighten a fan's experience at a game. More importantly, because of YinzCam's real-world deployment, this project will also provide Cisco with an invaluable opportunity to (1) discover and analyze the usage patterns and interests of mobile video users at sporting events, (2) understand the issues involved in supporting infrastructural and wireless-network performance and scalability for high-quality video, and (3) push the limits of current Cisco technologies to support such interactive, user-centered, mobile video applications. The data from YinzCam's deployment, the solutions that we innovate, and the lessons learned will have both business value and research impact.

John Kubiatowicz, University of California, Berkeley 
CISCO Affiliation for the Berkeley Parallel Computing Laboratory:  Reinventing Systems Software and Hardware for ManyCore systems

We propose to make CISCO an affiliate of the Berkeley Parallel Computing Research Laboratory (ParLAB). The primary charter of the ParLAB is to reinvent parallel computing for an era in which manycore processors will be increasingly prevalent, both in servers and in client machines. The ParLAB approach involves an innovative combination of new software engineering techniques, better parallel algorithms, automatic tuning, new operating system structures, and hardware support. The focus of ParLAB research lands at the center of many of CISCO's interests, including the structuring of the software and hardware aspects of Internet routing. We introduce the two-level partition model and the Tessellation OS.

Lu Yu, Zhejiang University
Moving Object Detection Based Adaptive 3D Filtering and Video Coding Optimization

Noise in video images is often introduced by the camera. If an ordinary camera is used in a normal, poorly lit office environment, noise will be more significant and will hurt coding efficiency more. Filtering the video can improve coding efficiency but will also introduce detail loss. Intelligent filtering that ensures high coding efficiency as well as good visual quality is valuable for video conferencing systems, especially when high resolution and high quality are required. In this proposal, we aim to optimize video coding by intelligently applying different de-noising filters and coding parameters according to the detection of moving objects and static background. This project will focus on: (1) moving object detection and tracking, including knowledge-based object detection and tracking; (2) content-adaptive 3-dimensional filtering; (3) an optimized video coding strategy, including mode selection and rate control.
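As a minimal illustration of the kind of content-adaptive filtering decision described above, the sketch below uses simple frame differencing to split a frame into moving objects and static background and assigns a per-pixel de-noising strength. Frame differencing is only a common baseline, not the knowledge-based detector the proposal develops, and the threshold and weights are arbitrary assumptions.

```python
import numpy as np

def segment_motion(prev_frame, curr_frame, thresh=15):
    """Crude moving-object mask via frame differencing (a common baseline;
    a knowledge-based detector would be far more sophisticated)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh  # True where motion is detected

def select_filter_strength(motion_mask):
    """Filter the static background aggressively and moving objects lightly,
    so de-noising does not blur object detail. Weights are illustrative."""
    return np.where(motion_mask, 0.2, 0.8)

# Toy 4x4 frames: a small bright "object" appears in the second frame.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 100
mask = segment_motion(prev, curr)
strength = select_filter_strength(mask)
assert mask.sum() == 4                  # 2x2 object detected as moving
assert strength[0, 0] == 0.8 and strength[1, 1] == 0.2
```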

Balaji Prabhakar, Leland Stanford Junior University
Stanford Experimental Data Center Lab

The Stanford Experimental Data Center Lab has a broad research agenda spanning networking, high-speed storage, and energy efficiency. In networking, a central project is to design and build a prototype of a large (hundreds of ports), low-latency, high-bandwidth interconnection fabric. Other major projects include rapid and reliable data delivery, and software-defined networking. Storage is viewed as existing largely, if not entirely, in DRAM on servers; this is the focus of the RAMCloud project. The overall goal is to combine the RAMCloud project and the interconnection fabric project so that applications can access data with a latency of less than 2 to 3 microseconds. The projects of the lab involve many faculty and students at Stanford, as well as Tom Edsell and Silvano Gai at Cisco Systems.

Mario Gerla, University of California, Los Angeles
Vehicular Network and Green-Mesh Routers

Vehicle communications are becoming a reality, driven by navigation safety requirements and by the investments of car manufacturers and Public Transport Authorities. As a consequence, many of the essential vehicle grid components (radios, Access Points, spectrum, standards, etc.) will soon be in place, enabling a broad gamut of applications ranging from safe, efficient driving to news, entertainment, mobile network games and civic defense. One of the important areas where this technology can have a decisive impact is traffic congestion relief and urban pollution mitigation in dense urban settings. Vehicles are the main offenders in terms of chemical and noise pollution, so it seems fair that some of the research directed at vehicular communications be used to neutralize the negative effects of vehicular traffic. In this proposal we develop a combined communications and architecture strategy for the control of urban congestion and pollution (the two being closely related). Vehicle-to-vehicle communications alone will not be sufficient to solve these problems; the infrastructure will play a major role. In particular, the fight against traffic congestion and pollution will require a concerted effort involving car drivers, Transport Authorities, Communications Providers and Navigator Companies. A key role in bringing these players together is played by the Green-Mesh, the wireless mesh network that leverages roadside Access Points and interconnects vehicles to the wired infrastructure. In this project, we will design the Green-Mesh architecture to deliver key functionalities, from safe navigation to intelligent transport, pollution monitoring and traffic enforcement.

Colin Perkins, University of Glasgow
Architecture and Protocols for Massively Scalable IP Media Streaming

The architecture and protocols for massively scalable IP media streaming are evolving in an ad-hoc manner from multicast UDP to multi-unicast TCP. This shift has advantages in terms of control and deployability over the existing network, but costs in terms of increased server infrastructure, energy consumption, and bandwidth usage. We propose to study the nature of the transport protocol and distribution model for IPTV systems, considering the optimal trade-off for services that are scalable and economical while still supporting trick-play.  Specifically, we will consider novel approaches to enhancing TCP streaming, as compared with managed IP multicast services. We will evaluate different transport protocols and distribution models for IPTV systems, to understand their applicability and constraints, and make recommendations for their use. Thereafter, we will develop router based enhancements to TCP-based streaming, borrowing ideas from publish/subscribe and peer-to-peer overlays, to bring its scalability closer to that of multicast IPTV systems.

Sharon Goldberg, Boston University
Partial Deployments of Secure Interdomain Routing Protocols

For over a decade, researchers have been designing protocols to correct the many security flaws in the Border Gateway Protocol (BGP), the Internet's interdomain routing protocol.  While there is widespread consensus that the Internet is long overdue for a real deployment of one of these security protocols, there has been surprisingly little progress in this direction.  The proposal argues that a major bottleneck to the deployment of secure interdomain routing protocols stems from issues of partial deployment. Since the Internet is a large, decentralized system, any new deployment of a secure interdomain routing protocol will necessarily undergo a period in which some networks deploy it and others do not.  Thus, the challenge we address in this proposal is twofold: first, we look for a rigorous metric to capture the security properties of a protocol when it is partially deployed; second, we model the economic incentives an AS has to deploy a secure routing protocol when it is only partially deployed in the system.  Our objective is to address an issue that we believe is crucial for both Cisco and the Internet at large: developing plans for adopting secure routing protocols in ways that are consistent with both the security requirements and the economics of the interdomain routing system.

Leonard Cimini, University of Delaware
MU-MIMO Beamforming for Interference Mitigation in Multiuser IEEE 802.11 Networks

The proposed work is a continuation of a successful project, initially focused on IEEE 802.11n beamforming (BF). Previously, we made significant progress in understanding fundamental issues in MIMO BF. Much of this work has been reported at several conferences and in three journal papers ready for submission. In the coming year, we propose to focus on interference mitigation for 802.11ac and for a radio-enabled backhaul network.  The enhanced capabilities promised by 802.11ac come with new challenges, in particular, the task of effectively managing the time, frequency, and spatial resources.  In addition to multiple antennas with multiple spatial streams, 802.11ac will permit a multiuser mode.  We will focus on transmit BF approaches, combined with receiver-based processing, to mitigate interference and maximize network performance. We also propose to build on our previous investigation of radio-based backhaul for a network of 802.11n (or 802.11ac) APs, with the goal of delivering a set of guidelines for the design of the radio backhaul network.

Robbert VanRenesse, Cornell University
Fault-Tolerant TCP Project

We propose to design, build, and evaluate Reliable TCP (RTCP), a service that supports TCP connections that persist through end-point failure and recovery.  Various projects have addressed this problem before.  Unfortunately, most are written for a client-server scenario in which the server is replicated, and do not support transparent recovery from client failures.  RTCP takes a radically different approach from prior work, leading to several salient features: * RTCP is fully symmetric, supporting the recovery of both endpoints; * RTCP requires no modifications to the endpoints -- they run any operating system kernel and TCP stack of choice; * RTCP buffers no application data; * RTCP has little or no effect on throughput, and under common circumstances has a lower latency cost than standard replication approaches.

E. William Clymer, Rochester Institute of Technology
A Preliminary Investigation of CISCO Technologies and Access Solutions for Deaf and Hard-of-Hearing

Faculty from the National Technical Institute for the Deaf (NTID), one of the eight colleges of Rochester Institute of Technology (RIT), together with colleagues from other RIT colleges, will provide a team of experts to consider the application and adaptation of CISCO products to benefit communication access for deaf and hard-of-hearing individuals. Three research strands are proposed: * State of the Art and Recommendations Related to 911-411-211 Telephone Response Systems * Evaluate Possible Use of Avatars to Enhance Direct Communication Support for Deaf and Hard-of-Hearing Users * Evaluate TelePresence Technologies for Face-to-Face and Remote Communication for Deaf and Hard-of-Hearing Users. This project is a multi-year effort. During the first year, faculty at NTID and RIT will form teams, review the literature and state of the art on CISCO access technologies, set up and test the various CISCO technologies in RIT/NTID laboratories, conduct focus groups and community demonstrations, gather and analyze expert opinion, and prepare whitepapers for each strand, including recommendations for future efforts in these areas. After the first year of work, the team will review progress and findings, determine which current or new research directions would be most fruitful, and make recommendations to CISCO for continued research and development. The common thread among the three proposed strands is the identification of local RIT experts in related fields who will refine and focus their expertise on the specific telecommunication challenges facing deaf and hard-of-hearing persons. With RIT/NTID faculty expertise in deafness, access technologies, computers, engineering, software development and business, and in collaboration with appropriate individuals at CISCO, this specialized knowledge will be used to evaluate CISCO technologies for potential enhancement and adaptation for the unique needs of deaf consumers.
RIT offers a unique setting for research of this nature, as over 1600 deaf students study and live on a campus with over 16,000 hearing students. Additionally, expertise from within the other seven colleges of RIT (including Applied Science and Technology, Business, Computing and Information Sciences, Engineering, and Science) will augment expertise in the field of deafness and access technologies of NTID to consider how existing and emerging CISCO technologies can ultimately be better deployed to enhance the communication situation for deaf and hard-of-hearing persons.

Mohammad Tehranipoor, University of Connecticut
Effective Reliability and Variability Analysis of Sub-45nm Designs for Improving Yield and Product Quality

As technology scales to 45nm and below, reliability and variability are becoming the dominant design and test challenges for the semiconductor and EDA industries. Current timing analysis methodologies cannot effectively address reliability and variability because they use foundry data as the basis for selecting critical paths, which may not accurately represent process parameters for different designs with different structures. It is vitally important to be able to extract variations at gates and interconnects placed at various metal layers, as well as to extract aging data from silicon. Such extraction can be done on a selected number of dies from various wafers and lots. This information can be used to identify critical reliability and variability paths at time zero, when the design is being timing-closed and tested. These variations will also feed statistical analysis for reliability. New design strategies and effective guardbanding will be developed to improve device performance, yield at production test, self-healing and calibration, reliability in the field, and product quality as technology scales. Our main objectives in this project are: (1) sensitivity extraction and aging data collection using low-cost, accurate on-chip sensors; (2) identification of critical paths for testing chip performance during production test (time zero), taking process variations into account; and (3) identification of paths that will become critical at time 't' in the field, enabling accurate guardbanding and testing both at manufacturing test and in the field using low-cost on-line test structures. This project will also support better performance verification, Fmax analysis, and model improvement.

Henning Schulzrinne, Columbia University
Mobile Disruption-Tolerant Networks

With the growing number of mobile devices equipped with wireless interfaces, mobile users increasingly find themselves in different types of networking environments. These networking environments, ranging from globally connected networks such as cellular networks or the Internet to the entirely disconnected networks of stand-alone mobile devices, impose different forms of connectivity. We investigate store-carry-forward networks that rely on user mobility to bridge gaps in wireless coverage ("disruption-tolerant networks"). The performance of routing protocols and mobile applications in such networks depends on the mobility behavior of human users. Using real-world GPS traces, we plan to investigate models that allow us to predict the future movement of data carriers, thus helping to greatly improve the performance of disruption tolerant networks.  The results are also likely to be useful for planning WiMax and 3G/4G networks.
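As a minimal sketch of the mobility prediction described above, the example below trains a first-order Markov model over GPS traces that have already been discretized into location cells, and predicts a carrier's most likely next cell. A first-order Markov model is only a standard baseline for this kind of prediction, not necessarily the model the project will adopt; the trace and cell names are invented.

```python
from collections import Counter, defaultdict

def train_markov(trace):
    """First-order Markov model: count observed cell-to-cell transitions."""
    counts = defaultdict(Counter)
    for here, nxt in zip(trace, trace[1:]):
        counts[here][nxt] += 1
    return counts

def predict_next(counts, here):
    """Most likely next cell, or None if this cell was never observed."""
    if here not in counts:
        return None
    return counts[here].most_common(1)[0][0]

# Invented trace: a commuter moving between home, a cafe, and the office.
trace = ["home", "cafe", "office", "cafe", "home", "cafe", "office"]
model = train_markov(trace)
assert predict_next(model, "home") == "cafe"    # home -> cafe both times
assert predict_next(model, "cafe") == "office"  # cafe -> office 2 of 3 times
```

A disruption-tolerant router could use such predictions to prefer handing data to carriers likely to move toward the destination's coverage gap.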

Xi Li, Embedded System Lab, Suzhou Institution For Advanced Study of University of Science and Technology of China
Malware Detection for Smartphones

The goal of our project is to implement a malware detection system for smartphones. The system should detect malware accurately and efficiently while using few enough resources to fit the constraints of smartphones. Our research focuses on the design and implementation of a malware detection system, including: 1) the security mechanisms and risks of specific smartphone platforms; 2) behavioral features of malware and normal software; 3) malware detection algorithms; 4) solutions to the resource limitations of smartphones.

Ben Zhao, University of California, Santa Barbara
Understanding and Using the Human Network to Improve the Internet Experience: An Accurate Model for Degree Distributions in Social Networks

With the availability of empirical data from online social networks, researchers are finally able to validate models of social behavior using data from real OSN systems.  We find that the widely accepted and relied-upon Power Law model provides a poor fit for degree distributions. More importantly, this poor fit results in a severe over-estimate of the presence of supernodes in the network, and provides an erroneous basis for researchers designing heuristics for NP-hard problems on large graphs. We propose a new statistical model that provides a significantly improved fit for social graphs. We will derive both the statistical properties and a generative method for this model, and will use data from our study of over 10 million Facebook profiles to validate our approach.  Finally, we will revisit and revise solutions to key problems whose heuristics were based on faulty assumptions about supernodes in the graph, including influence maximization, shortest-path computation, and community detection.
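For context, the sketch below shows one standard way such power-law fits are made: the Clauset-Shalizi-Newman style maximum-likelihood estimator for the exponent of a degree sequence. This is a textbook diagnostic against which alternative models can be compared, not the improved statistical model the proposal develops, and the toy degree sequence is invented.

```python
import math

def powerlaw_mle_alpha(degrees, xmin=1):
    """Discrete-approximation MLE for the power-law exponent alpha in
    p(x) ~ x^(-alpha), following the Clauset/Shalizi/Newman recipe:
    alpha_hat = 1 + n / sum(ln(x_i / (xmin - 1/2))).
    A quick point estimate, not a goodness-of-fit test."""
    xs = [d for d in degrees if d >= xmin]
    n = len(xs)
    return 1.0 + n / sum(math.log(x / (xmin - 0.5)) for x in xs)

# Invented toy degree sequence; real OSN graphs have millions of nodes.
degrees = [1, 1, 1, 2, 2, 3, 5, 8, 20]
alpha = powerlaw_mle_alpha(degrees)
assert 1.0 < alpha < 4.0  # exponents reported for social graphs fall here
```

Comparing the log-likelihood of this fit against an alternative (e.g., lognormal) on the same data is what reveals whether the power law is actually a good model.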

Constantine Dovrolis, Georgia Institute of Technology, College of Computing
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation

Today, most Over-The-Top (OTT) video streaming services use HTTP/TCP. TCP, however, with its flow and congestion control mechanisms as well as its strict reliability requirement, is far from ideal for video streaming. On the other hand, adaptive video streaming over UDP has several desirable features: the ability to control the transmission rate on small timescales (without the constraints of window-based flow or congestion control), selective (or optional) retransmissions, additional protection for certain frames using Forward Error Correction (FEC), and network measurements using video packets. The main premise of the proposed research is that all these desirable features of UDP-based video streaming can also be gained over TCP, as long as the video streaming application deploys certain mechanisms that we will design, prototype, and evaluate in this research.
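As one concrete example of the kind of application-layer mechanism this implies, the sketch below shows throughput-driven bitrate selection sitting above TCP: the application measures delivered throughput and picks an encoding rate, recovering some of the rate control that UDP streaming gets for free. The bitrate ladder and safety margin are illustrative assumptions, not the mechanisms the research will actually design.

```python
def pick_bitrate(throughput_kbps, ladder=(350, 700, 1500, 3000), safety=0.8):
    """Choose the highest encoding rate that fits within a safety margin of
    the measured TCP throughput. The ladder (kbps) and 0.8 margin are
    arbitrary illustrative values."""
    budget = throughput_kbps * safety
    chosen = ladder[0]                 # floor rung if nothing else fits
    for rate in ladder:
        if rate <= budget:
            chosen = rate
    return chosen

assert pick_bitrate(5000) == 3000  # 4000 kbps budget -> top rung
assert pick_bitrate(1000) == 700   # 800 kbps budget -> 700 kbps rung
assert pick_bitrate(200) == 350    # below the ladder: take the floor rung
```

A real client would additionally smooth the throughput estimate and account for playout-buffer occupancy before switching rates.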

Robert Daasch, Portland State University
Semiconductor Device Quality and Reliability

This proposal seeks funding support to improve the manufacture of Cisco products.  The proposal explores strategies to collect and analyze test data from divergent sources (e.g., ASIC suppliers, contract manufacturers, and Cisco). The results of the work will be new ways to direct the test flow of application-specific integrated circuits (ASICs), reduce the frequency of costly chip failures that appear late in the manufacturing flow, and enhance coordination between Cisco and its suppliers.  The proposed strategy reintegrates data from component-manufacturer wafer sort and final test through end-use system diagnostic tests (SDTs) into a Virtual Integrated Device Manufacturing (VIDM) environment.   The proposal presents new concepts, such as golden rejects, as standards to be leveraged by the VIDM to control the integrated-circuit manufacturing process.

Kirill Levchenko, University of California, San Diego
Identifying Malicious URLs Using Machine Learning

We propose to identify malicious URLs based entirely on the URL alone. The motivation is to provide inherently better coverage than blacklisting-based approaches (e.g., correctly predicting the status of new sites) while avoiding the client-side overhead and risk of approaches that analyze Web content on demand. We intend to apply machine learning techniques to classify sites based only on their lexical and domain-name features. The major advantage of this approach is that, in addition to being safer for the user, classifying a URL with a trained model is a lightweight operation compared to first downloading the page and then using its contents for classification. In addition to evaluating this approach, we also intend to tackle issues of practical deployment, namely scalability and the complexity of updating the features.
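To make the idea concrete, the sketch below extracts a few lexical features of the kind described: everything is computed from the URL string itself, so no page download is needed and classification stays lightweight. The specific feature names and the suspicious-token list are illustrative assumptions, not the proposal's actual feature set; a trained classifier would then consume a feature vector like this.

```python
import re

def lexical_features(url):
    """Toy lexical features computed from the URL string alone. The
    particular features and tokens here are illustrative, not the
    proposal's actual feature set."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    return {
        "url_len": len(url),
        "host_len": len(host),
        "num_dots": host.count("."),                    # subdomain depth
        "num_digits": sum(c.isdigit() for c in url),
        "has_ip_host": int(bool(re.fullmatch(r"[\d.]+", host))),
        "num_susp_tokens": sum(t in url.lower()
                               for t in ("login", "confirm", "account")),
    }

f = lexical_features("http://192.168.0.1/login/confirm.php")
assert f["has_ip_host"] == 1 and f["num_susp_tokens"] == 2
```

In deployment, millions of such feature vectors per day would be scored by the trained model, which is why scalable feature updating matters.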

Steve Renals, The University of Edinburgh
Robust Acoustic Segmentation for Conversational Speech Recognition

Robustness across different acoustic environments and different task domains is a key research challenge for large-vocabulary conversational speech recognition.  Within the AMI and AMIDA projects we have developed a speech recognition system (AMI-ASR), primarily targeted at multiparty meetings, which has demonstrated state-of-the-art accuracy and close to real-time performance.  We have recently tested this system on a variety of speech recordings (including podcasts, recorded lectures and broadcast TV programmes).  The results indicated that the core acoustic and language models remain accurate across these different environments and domains (although domain-specific training/adaptation data obviously helps).  However, a large number of the observed errors come from problems in the acoustic segmentation module.  These errors are primarily caused by segments of non-speech audio being decoded as speech by the ASR system. In this project we plan to develop acoustic segmentation modules that are robust across domains and acoustic environments, and which can also operate with low latency.  Our approach will be to develop methods that combine multiple classifiers and acoustic features. During the project we will experiment on a variety of data, including meeting recordings from several sources, podcasts and lectures.  The final evaluation of the system will be in terms of ASR error rate, and will use the project development data as well as any "surprise" data that Cisco may wish to provide.

Xuehai Zhou, Embedded System Lab. , Suzhou Institution For Advanced Study of University of Science and Technology of China
Mobile Cloud Computing

Along with the rapid development of smartphones and wireless networks, we believe more and more services will emerge on mobile devices. However, providing these services through the traditional service infrastructure is often difficult, or even impossible. In this respect, adapting current cloud service platforms to mobile devices is a promising solution. This project aims at finding the difficulties current cloud platforms face with mobile services, evaluating these problems, and designing new structures or algorithms to solve them. We will finally design a system framework -- a new mobile service infrastructure -- that will add functionality to deliver or enable new services and applications on mobile devices.

Sujit Dey, University of California, San Diego
Adaptive Wireless Multimedia Cloud Computing to Enable Rich Internet Gaming on Mobile Devices

With the evolution of smart mobile devices, and their growing capability and rapid adoption for Internet access, there is a growing desire to enable rich multimedia applications on these devices, such as 3D gaming and augmented reality, Internet and social-networking video, and enterprise applications like tele-medicine and tele-education. However, despite the progress in the capabilities of mobile devices, there will be a widening gap with the growing computing and power requirements of emerging Internet multimedia applications, such as multi-player Internet gaming. We propose a Wireless Multimedia Cloud Computing approach, in which the computing needs of rich multimedia applications are met not by the mobile devices but by multimedia computing platforms embedded in the wireless network. We will address wireless network scalability and the Quality of Experience achieved, besides computing scalability and energy efficiency, to make the proposed approach acceptable to users and economically viable. Specifically, we will have to handle the challenges of the wireless network, such as bandwidth fluctuations, latency, and packet loss, while satisfying the stringent QoS requirements of real-time and interactive multimedia applications, like the very low response times needed in multi-player gaming. In addition, we will have to ensure very high scalability in the number of mobile users that can be supported simultaneously, which can be challenging considering the high computation needs that multimedia tasks like 3D rendering can impose on the wireless cloud computing resources, and the high bandwidth requirements of delivering multimedia data from the wireless cloud servers to the mobile devices.
To address the above challenges, we propose a new Wireless Cloud Scheduler that will supplement current wireless network scheduling by simultaneously considering the computing and network bandwidth requirements of the multimedia applications, the current availability of the cloud computing resources, and the conditions of the wireless network. To ensure a high-quality mobile user experience, we will analyze how wireless network and cloud computing factors affect user experience, and develop a model to quantitatively measure the quality of experience in Wireless Multimedia Cloud Computing. We will develop adaptation techniques for multimedia applications like 3D rendering, augmented reality, and video encoding, which can enable up to a 10-fold reduction in the computing and network bandwidth requirements of the multimedia applications while still ensuring an acceptable mobile user experience. The resulting adaptive applications residing in the wireless cloud, together with the user experience models, will be utilized by the wireless cloud scheduler to ensure high network and computing scalability while enabling a high-quality mobile user experience. While the above approaches can enable a range of multimedia applications on mobile devices, in this proposal we will particularly focus on Cloud Mobile Gaming (CMG): enabling rich, multi-player Internet games on mobile devices. We will develop a research prototype of the CMG architecture, including a mobile gaming user experience measurement tool, application adaptation techniques, and a wireless cloud scheduler for mobile gaming. We will demonstrate the feasibility and scalability of the CMG approach by playing representative Internet games on mobile devices and netbooks over WiFi, 3G, and 4G wireless networks, with enhanced gaming user experience.

Li Xiong, Emory University
Enabling Privacy for Data Federation Services

There is an increasing need for organizations to build collaborative information networks using data federation services across distributed, autonomous, and possibly private databases containing personal information.  Two important privacy and security constraints need to be considered: 1) the privacy of individuals or data subjects, who need to protect their sensitive information, and 2) the security of data providers, who need to protect their data or data ownership.  A fundamental research question is: what models correctly capture the privacy risk in the data federation setting?  Simply combining the privacy models of single-provider settings with the security notions of secure multi-party computation is not sufficient to address the potential privacy risks of data federations.  In addition, few works have taken a systems approach to developing a privacy-enabled data federation infrastructure. The objective of this proposal is to develop a privacy-preserving data federation framework that includes: 1) rigorous and practical privacy notions and mechanisms for protecting data across multiple data providers, and 2) a scalable and extensible architecture and middleware that can be easily deployed for privacy-preserving data federations.  The proposed research will reexamine state-of-the-art privacy notions based on both adversarial privacy and differential privacy in the distributed setting, and will adapt and design new models and efficient algorithms to address potential risks.  The research will build upon the PI's previous work on a next-generation distributed mediator-based data federation architecture and secure query processing protocols.  The intent is to demonstrate the viability of integrating state-of-the-art privacy protection and secure computation technology into an advanced and extensible infrastructure framework for seamless privacy-preserving data federation.
The PI and her students have unique qualifications, with expertise and research experience in both data privacy and security and in distributed computing, to successfully complete the project.

Riccardo Sisto, Politecnico di Torino
Fast and memory-efficient regular expression matching

Regular expression matching is known to be a crucial task in many packet processing applications. Especially when performing deep packet inspection, the need arises to match increasingly complex expressions in real time. Intrusion detection systems and e-mail spam filters are among the most significant examples of applications that rely heavily on regex matching of considerable complexity. Thanks to the efficiency of the Deterministic Finite Automata (DFA) model in terms of execution time on purely sequential machines and limited usage of bandwidth toward main memory, most recent software approaches for regex matching focus on representations derived from DFA, enriched with various techniques to limit the well-known state explosion problem of DFA in the case of complex patterns. In this project we propose an orthogonal approach based on Non-deterministic Finite Automata (NFA), which are well known for behaving well in terms of memory occupancy even with very complex pattern sets. The main drawback of directly using the NFA form for regex matching lies in the need to explore multiple paths in parallel in order to guarantee an adequate processing speed. The leading idea of this project is that processing architectures with a high degree of parallelism and very high memory bandwidth could be exploited to efficiently match regexes directly on their NFA representation, thus avoiding memory explosion. This project aims at validating this idea by implementing an NFA-based pattern matching software engine on processors characterized by high parallelism and abundant memory bandwidth, such as modern Graphics Processing Units or other massively multicore architectures with such characteristics. Finally, a mainstream architecture such as the Intel x86 will also be evaluated, by investigating whether its vector-based instruction set (e.g. SSE) is suitable for executing NFA at high speed on commodity hardware.
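The trade-off the proposal exploits can be illustrated with a toy NFA simulator: all currently active states are advanced in lockstep for each input symbol, so memory stays linear in the pattern size while the per-symbol work over the state set is exactly what maps onto GPU threads or SIMD lanes. This is a minimal illustrative sketch (the toy automaton and its transition-table encoding are ours, not the project's engine):

```python
def nfa_match(transitions, start, accept, text):
    """Simulate an NFA by tracking the full set of active states.

    Memory stays O(#states) regardless of the pattern set, unlike a DFA
    whose state count can explode for complex patterns; the cost is that
    each input symbol may advance many states in parallel -- the work
    that parallel hardware can absorb.
    """
    active = {start}
    for ch in text:
        nxt = set()
        for s in active:                       # advance every active state
            nxt |= transitions.get((s, ch), set())
        active = nxt
        if not active:                         # no path survives: early reject
            return False
    return bool(active & accept)

# Toy NFA for the regex a(b|c)*d : state 0 --a--> 1 --b,c--> 1 --d--> 2
TRANS = {
    (0, 'a'): {1},
    (1, 'b'): {1},
    (1, 'c'): {1},
    (1, 'd'): {2},
}
```

In a bit-parallel implementation the active set would be a bitmask and the inner loop a handful of table lookups and ORs per input symbol, which is what makes wide-SIMD and GPU execution attractive.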

Joy Zhang, Carnegie Mellon University
Behavior Modeling for Human Network

The future usage of mobile devices and services will be highly personalized. The advancement of sensor technology makes context-aware and people-centric computing more promising than ever before. Users will incorporate these new technologies into their daily lives, and the way they use new devices and services will reflect their personality and lifestyle. This opportunity opens the door for novel paradigms such as behavior-aware protocols, services, and applications. Such applications and services will look into user behavior and leverage the underlying patterns in user activity to adapt their operations, and they have the potential to work more efficiently and suit the real needs of users. While recording all mobile sensor information is trivial, effectively understanding, inferring, and predicting a user's behavior from the heterogeneous sensory data still poses challenges for researchers and industry professionals. We propose a novel approach to the study of human behavior. By converting the heterogeneous sensory data into a one-dimensional behavior language representation, we can apply statistical natural language processing algorithms to index, retrieve, cluster, summarize, and infer high-level meanings from the recorded multi-sensor information about users' behavior. We will also investigate using social networks to infer the social behavior of certain user groups to tailor services for these groups. This research will advance our understanding of human behavior and will provide a framework that can be used to guide both future research and engineering in human-centered computing.
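The "behavior language" idea can be made concrete with a small sketch: quantize raw sensor magnitudes into discrete behavior words, treat each user's history as a string, and compare users with a standard NLP tool such as n-gram profiles. The bin names, thresholds, and similarity measure below are illustrative assumptions, not the proposed system:

```python
from collections import Counter
import math

def to_behavior_words(readings,
                      bins=(("idle", 0, 10), ("walk", 10, 100), ("run", 100, 1e9))):
    """Quantize raw sensor magnitudes into one discrete 'behavior word' per sample."""
    words = []
    for r in readings:
        for name, lo, hi in bins:
            if lo <= r < hi:
                words.append(name)
                break
    return words

def ngram_profile(words, n=2):
    """Count n-grams of behavior words, as one would for a text corpus."""
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram count profiles."""
    dot = sum(p[k] * q[k] for k in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0
```

Once behavior is a token sequence, the same machinery supports the indexing, retrieval, and clustering tasks the proposal names.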

Ramesh Govindan, University of Southern California
Towards Predictive Control for Grid-Friendly Smart Buildings

Commercial and residential buildings consume the largest portion of electricity generated in the U.S.  Supply-side methods to deal with increasing peak loads require the addition of new power plants, reduce efficiency at off-peak hours, and raise energy costs.  A promising alternative approach to managing peak electricity consumption is demand response and demand-side management, which can attenuate the peak load on the electricity power grid and on power generation.  This can slow the rate of required infrastructure investment, improve efficiency, and keep costs manageable.  Therefore, there is great interest in developing demand response control systems to reduce the peak relative to the base load.  This proposal seeks to explore the interplay between two technologies in order to achieve efficient demand-side management: spatially dense, near real-time environmental measurements provided by a wireless sensor network, and a model-based predictive control (MPC) system which uses a building model derived from these measurements.

Edward Knightly, William Marsh Rice University
Robust High-Performance Enterprise Wireless: Inference, Management, and Cooperation

Our completed project yielded three key outcomes: (i) expanded deployment of the TFA Rice Wireless programmable urban WiFi network to a 3 km² area and a user population exceeding 4,000, (ii) an analysis of and solution to the multi-hop wireless routing problem, and (iii) an analysis of and solution to the problem of congestion control over imbalanced wireless channels. We propose the following project for the next research phase. First, we will develop a network inference and management toolkit to address network diagnostics and to answer "what-if" queries. To achieve this, we will feed easy-to-collect average statistics from each node into an inference framework that can disentangle complex interactions among nodes. In particular, we will use on-line measurements to assess how topology, channel conditions, and traffic demands have yielded the current network state, and use a combination of on-line measurements, protocol rules, and analytical models to develop highly realistic predictions of how to move the network from an under-performing state to one that meets the network manager's performance objectives. Second, we will design and implement a MAC protocol that enables multi-node cooperation to form a robust "virtual MIMO" array. We have already demonstrated in the lab that two nodes (such as a user's laptop and WiFi phone) can employ symbol-level cooperation to obtain a dramatic increase in channel robustness. We will explore protocol design for dense enterprise deployments and develop techniques to automatically ensure high robustness and resilience to fading and interference for management traffic and priority traffic. Throughout this project, we will employ an experimental research methodology utilizing both our in-building custom FPGA platform and our outdoor urban WiFi deployment.

Bernd Girod, Stanford University
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation

Typical IPTV solutions today combine multicast with local retransmissions of IP packets from a Retransmission Server (RS). The RS also supports fast channel change through unicast bursts that provide rapid synchronization of key information and supply an initial video stream. Building upon our ongoing collaboration with Cisco in this area, we propose to investigate advanced video transcoding schemes for fast channel change and error repair. We are particularly interested in the optimal placement of random access points in the transcoded bitstream and multicast-stream-aware transcoding that minimizes quality degradations from splicing of unicast and multicast streams.

George Porter, University of California, San Diego
Efficient Data-intensive Scalable Computing

Data-intensive Scalable Computing (DISC) is an increasingly important approach to solving web-scale problems faced by computing systems that answer queries over hundreds of TB of input data. The purposes of these systems are broad-ranging, from biological applications that look for subsequence matches within long DNA strands to scientific applications that process deep-space radio telescope data for radio astronomers.  The key to achieving this scale of computing is harnessing a large number of individual commodity nodes.  Two key unaddressed challenges to building and deploying DISC systems at massive scale are per-node efficiency and cross-resource balance as the system scales.  While DISC systems like Hadoop scale to hundreds or thousands of nodes, they typically do so by sacrificing CPU, storage, and especially networking efficiency, frequently resulting in unbalanced systems in which some resources (such as the network) are largely unused.  The goal of this proposal is to design DISC systems that treat efficiency as a first-class requirement.  We seek to develop programming models and tools to build balanced systems in which the key resources of networking, computing, and storage are maximally utilized.  Since real systems will exhibit heterogeneity across nodes, we will explore techniques to flexibly deploy parts of the application accordingly.  Lastly, we will provide fault tolerance while maintaining efficient use of computing and storage resources.  This work will initially be set in the context of building TritonSort, a staged, pipeline-driven dataflow processing system designed to efficiently sort the vast quantities of data present in the GraySort benchmark challenge.

Faramarz Fekri, Georgia Tech
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation

The objective of this research is the adaptive streaming of ubiquitous video to consumers with access to a variety of platforms, including the television, PC, and mobile handset, over network paths with diverse characteristics, with the goal of providing a high-quality, seamless experience. With the emergence of multimedia applications, demand for live and on-demand video services and real-time multi-point applications over IP has increased. Viewers access video from various devices and networks with varying conditions. Since multicasting generally requires the commercial provider's support, we have to adopt unicast on the access network. However, transferring large volumes of live content (TV) via redundant unicast transmission of the same content to multiple receivers is inefficient. One alternative is to use multicast inside the core but unicast on the access network. This proposal presents an adaptive scheme for layered video transport to viewers with the flexibility to access the stream from various devices and heterogeneous networks. Our solution applies modern elastic FEC to H.264/SVC at the source, enabling partial transport of video layers in a new architecture involving distributed delivery nodes (caches). Caching enables the system to scale when multicasting to a large number of viewers is not an option. This new architecture for live content transport is inspired by a non-trivial advantage of using the scalable video stream in conjunction with elastic FEC in a distributed cache environment. It has the potential to solve the challenging problem of video streaming in a way that is easy to implement and deploy while adapting to the specific needs of heterogeneous viewers, addressing packet loss and congestion, accommodating both live content transport and on-demand video unicast, and, especially, requiring less bandwidth for the same quality of service in live content delivery to large populations.
The new approach offers other benefits, such as robustness, fairness, bandwidth savings, load balancing, and coexistence with inelastic flows, that we are interested in investigating further.

Gerald Friedland, International Computer Science Institute
Multimodal Speaker Diarization

Speaker diarization is defined as the task of automatically determining "who spoke when", given an audio file and no prior knowledge of any kind other than the binary format of the sound file. The problem has been investigated extensively in the speech community, and NIST has conducted evaluations to measure the accuracy of diarization for the purpose of Rich Transcription, i.e., speaker diarization as a front-end to ASR. Recent results, however, have shown that speaker diarization is useful for many other applications, especially video navigation. Current speaker diarization systems do not seem robust enough to be used for the navigation of arbitrary video content, as they require intensive domain-specific parameter tuning. Traditionally, speaker diarization has been investigated using only methods from speech processing, and visual cues have rarely been taken into account. Initial results indicate that adding the visual domain can increase the accuracy and robustness of speaker diarization systems. We therefore propose to conduct systematic research on how audiovisual methods may improve the accuracy and robustness of speaker diarization systems.

Alan Bovik, University of Texas
Objective Measurement of Stereoscopic Visual Quality

There is a recent and ongoing explosion in 3D stereoscopic displays, including cinema, TV, gaming, and mobile video. The ability to estimate the perceptual quality of the delivered 3D stereoscopic experience is quite important for ensuring high Quality-of-Service (QoS). Towards this end, we propose to develop principled metrics for 3D stereoscopic image quality, create quantitative models of the effects of distortions on the perception of 3D image quality, and develop Stereo Image Quality Assessment (IQA) algorithms that can accurately predict the subjective experience of stereoscopic visual quality. To achieve this, we will conduct psychometric studies on a recently constructed database of high-resolution stereo images and distorted versions of these images, for which we have precision lidar ground truth. Based on these studies, which will reveal perceptual masking principles involving depth, we will develop methods for estimating the degradation in quality experienced when viewing distorted stereoscopic images. By combining quantitative 3D perception models with new statistical models of naturalistic 3D scenes that we have discovered under an NSF grant, including the bivariate statistics of luminance and depth in real-world 3D scenes, we shall create a new breed of perceptually relevant, statistically robust Stereo IQA algorithms. We envision that our Stereo IQA algorithms will achieve a much higher degree of predictive capability than any existing methods. More importantly, we will develop Stereo IQA algorithms that are blind, or No-Reference, which remains, to date, a completely unsolved problem.

Xavi Masip, Universitat Politecnica de Catalunya
Sponsorship for the 9th International Conference on Wired/Wireless Internet Communications (WWIC'11)

The 9th International Conference on Wired/Wireless Internet Communications (WWIC 2011) will be organized by the Technical University of Catalonia (UPC) in Barcelona, Spain, in June 2011. The conference is traditionally sponsored by Ericsson and Nokia, and the 2006 edition was sponsored by Cisco as well. Cisco is participating in the conference by serving on the TPC and providing a keynote speaker. As a WWIC patron, Cisco will benefit from wide visibility in conference fliers, the CFP, news, and conference proceedings, as well as from two free registrations and a keynote talk on an open topic.

Robert Beverly, Naval Postgraduate School
Transport-Layer Identification of Botnets and Malicious Traffic

Botnets, collections of compromised hosts operating under common control, typically for nefarious purposes, are a scourge. Modern botnets are increasingly sophisticated and economically and politically motivated. Despite commercial defenses and research efforts, botnets continue to impart both direct and indirect damage upon users, service providers, and the Internet at large. We propose a unique approach to detecting and mitigating botnet activity within the network core via transport-level (e.g. TCP) traffic signal analysis. Our key insight is that local botnet behavior manifests remotely as a discriminative signal. Rather than examining easily forged and abundant IP addresses, or attempting to home in on command-and-control or content signatures, our proof-of-concept technique is distinct from current practice and research. Using statistical signal characterization methods, we believe we can exploit botnets' basic requirement to source large amounts of data, be it attacks, spam, or other malicious traffic. The resulting traffic signal provides a difficult-to-subvert discriminator. Our initial prototype [3] validates this framework as applied to a common botnet activity: abusive electronic mail. Abusive email imparts significant impact on both providers [2] and users. Not only did this prototype yield highly promising results without reputation (e.g. blacklists, clustering) or content (e.g. Bayesian filters) analysis, it has several important benefits. First, because our technique is IP-address and content agnostic, it is privacy-preserving and therefore may be run in the network core. Second, in-core operation can stanch malicious traffic before it saturates access links. Third, by exploiting the root signature of the attacks, the traffic itself, botnet operators must either acquire more nodes or send traffic more slowly, either of which imparts an economic cost on the botnet.
The fundamental technical approach of our prototype, treating the traffic signal as a lowest-level primitive without relying on brittle heuristics or reputation measures, has the potential to generalize to many security contexts. We propose to investigate using transport-traffic analysis in new contexts and in conjunction with additional data to perform multi-layer correlation. Of particular interest is the dual of the abusive traffic problem: identifying the scam and botnet hosting infrastructure to avert phishing and malware propagation attacks. Longer term, we believe the method can benefit service providers by detecting and reacting automatically to third-party congestion experienced by their customers, whether induced by faults, overloads, or attacks.
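As a concrete illustration of content- and address-agnostic transport-signal analysis, the sketch below reduces a flow to a few timing and volume features and applies a toy threshold rule. The feature set, the thresholds, and the `looks_abusive` rule are illustrative assumptions of ours; a real system would learn the decision boundary statistically rather than hard-code it:

```python
import statistics

def flow_features(timestamps, sizes):
    """Content- and address-agnostic features of one flow: interarrival-time
    statistics and sending rate, the kind of transport-level signal used to
    characterize senders without inspecting payloads or IP addresses."""
    iats = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    duration = timestamps[-1] - timestamps[0]
    return {
        "iat_mean": statistics.mean(iats),
        "iat_stdev": statistics.pstdev(iats),
        "bytes_per_sec": sum(sizes) / duration,
    }

def looks_abusive(feat, rate_thresh=50_000, jitter_thresh=0.01):
    """Hypothetical toy rule: sustained high rate with machine-like regular
    timing (very low jitter) is flagged as bot-like."""
    return feat["bytes_per_sec"] > rate_thresh and feat["iat_stdev"] < jitter_thresh
```

Because the features depend only on when and how much a host sends, a bot must slow down or spread load to evade them, which is the economic cost the proposal highlights.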

Molly O'Neal, Georgia Tech
GTRNOC Convergence Innovation Competition

The Convergence Innovation Competition (CIC) is an annual event produced by the GT-RNOC each spring. This unique competition is open to all Georgia Tech students and focuses on innovation in the areas of Converged Services, Converged Media, Converged Networks, and Converged Client and Server platforms and environments. This includes the convergence of services (e.g., maps + blog + phone), service delivery platforms (e.g., Internet + IMS), access technologies (e.g., 3G + WiFi + IPTV), and client platforms (e.g., smartphone + set-top box + laptop), as well as smart grids, smart homes, and connected vehicles. Participants are supported by GT-RNOC research assistants who are domain experts in a variety of relevant areas, and are given access to a wide variety of resources contained in the Convergence Innovation Platform.  The goal of this competition is to develop innovative convergence applications. While the primary focus is on working end-to-end prototypes, the applications and services developed also need to be commercially viable, with a strong emphasis on the user experience.

Ananthanarayanan Chockalingam, Indian Institute of Science, Bangalore
Transmitter/Receiver Techniques for Large-MIMO Systems

This research proposal is aimed at investigating novel architectures and algorithms for large-MIMO encoding (at the transmitter) and large-MIMO detection and channel estimation (at the receiver) at practically affordable low complexities. The approach we propose for the large-MIMO detection and channel estimation problem is to seek tools and algorithms rooted in machine learning/artificial intelligence (ML/AI). The motivation for this approach is that, interestingly, certain algorithms rooted in ML/AI (e.g., greedy local search, belief propagation) achieve increasingly close-to-optimum performance at low complexity as the number of dimensions grows. The proposed research will investigate such algorithms for suitability and adoption in large-MIMO detection and channel estimation. We will also investigate large-MIMO encoding techniques, with no channel state information at the transmitter (CSIT) or with partial/limited CSIT, that can keep the complexity at the receiver low. The approach we propose for large-MIMO encoding at the transmitter is to design space-time block codes (STBCs) that are amenable to the detection techniques discussed above and/or that admit fast sphere decoding. Wireless mesh and WiFi networks (e.g., IEEE 802.11ac), where Cisco is a key player, can significantly benefit from the high spectral efficiency of the large-MIMO approach.
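The flavor of the greedy local-search detectors mentioned above can be sketched in a few lines for BPSK: starting from an initial guess, repeatedly flip any single symbol that reduces the residual ||y − Hx||², stopping at a local minimum. Per-sweep cost grows only linearly in the number of antennas, which is what makes this family attractive at large dimensions. This is an illustrative toy (real-valued channel, noiseless test case), not the proposed algorithms:

```python
def residual(H, x, y):
    """||y - Hx||^2 for a real-valued channel matrix H (list of rows)
    and a BPSK symbol vector x with entries in {-1, +1}."""
    r = 0.0
    for i, row in enumerate(H):
        e = y[i] - sum(h * xj for h, xj in zip(row, x))
        r += e * e
    return r

def greedy_detect(H, y, x0, max_iters=100):
    """Greedy local search (likelihood-ascent style) MIMO detection:
    sweep the symbols, keep any single flip that lowers the residual,
    and stop once a full sweep yields no improvement."""
    x = list(x0)
    best = residual(H, x, y)
    for _ in range(max_iters):
        improved = False
        for j in range(len(x)):
            x[j] = -x[j]                 # tentatively flip symbol j
            r = residual(H, x, y)
            if r < best:
                best, improved = r, True # keep the flip
            else:
                x[j] = -x[j]             # revert
        if not improved:
            break
    return x
```

Like all local search, this can stall in a local minimum on hard channels; the proposal's interest is precisely in algorithms of this class whose behavior improves as the dimension grows.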

Yao Wang, Polytechnic Institute of New York University
Video Coding/Adaptation and Unequal Error Protection Considering Joint Impact of Temporal and Amplitude Resolution

Both wired and wireless networks are highly dynamic, with time- and location-varying bandwidth and reliability. A promising solution to tackle such variability is scalable video coding (SVC), so that the video rate can be adapted to the sustainable bandwidth and different video layers can be protected against packet loss according to their visual importance. A scalable video is typically coded into many spatial, temporal, and amplitude (STA) layers. A critical issue in SVC adaptation is how to determine which layers to deliver to maximize the perceptual quality, subject to the total rate constraint at the receiver. A related problem is how to allocate error protection bits over the chosen layers. To solve such problems, one must be able to predict the perceptual quality and bit rate of a video under any combination of spatial, temporal, and amplitude resolutions (STAR). (Note that each particular set of STA layers corresponds to a specific combination of STAR.) Although there has been much work on perceptual quality modeling of compressed video, most of it has considered video coded at fixed spatial and temporal resolutions. Similarly, prior work on rate modeling and rate control has mostly focused on the impact of the quantization stepsize (QS) only. Most of the recent work on SVC adaptation involving frame rate and frame size is quite ad hoc and not based on analytical quality and rate models. Recently we have developed accurate and yet relatively simple models relating the perceptual quality and rate, respectively, to the frame rate and QS. We are currently working on extending these models to consider spatial resolution as well. We have further considered how to determine the optimal frame rate and QS to maximize the perceptual quality subject to the rate constraint for a single video.
The same principle can be employed to optimize either real-time video encoding or adaptation of pre-coded SVC video, based on feedback about the sustainable end-to-end bandwidth.  We propose to collaborate with Cisco researchers in the following related research areas: 1) rate allocation and video adaptation among multiple videos to avoid congestion at bottleneck links in the Internet, 2) unequal error protection across STA layers to optimize the received video quality under a total rate constraint, and 3) wireless video multicast with fair yet differentiated quality of service. All of this work will make use of our quality and rate models to optimize the perceptual quality (for either a single recipient or multiple recipients) subject to a given rate constraint.
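The single-video optimization described above (choose the frame rate and QS that maximize modeled quality under a rate budget) can be sketched as a small grid search over the operating points. The model forms and coefficients below are illustrative placeholders we invented for the sketch, not the authors' fitted quality and rate models:

```python
import itertools

def rate_kbps(frame_rate, qs):
    """Illustrative rate model: rate grows with frame rate and shrinks
    with quantization stepsize (placeholder coefficients)."""
    return 4000.0 * (frame_rate / 30.0) ** 0.9 / qs

def quality(frame_rate, qs):
    """Illustrative perceptual-quality model: quality improves with
    frame rate and degrades with QS (placeholder coefficients)."""
    return (frame_rate / 30.0) ** 0.5 * (1.0 / qs) ** 0.3

def best_operating_point(budget_kbps, frame_rates=(7.5, 15, 30),
                         steps=(1, 2, 4, 8, 16)):
    """Pick the (frame rate, QS) pair with the highest modeled quality
    whose modeled rate fits the bandwidth budget; None if nothing fits."""
    feasible = [(quality(f, q), f, q)
                for f, q in itertools.product(frame_rates, steps)
                if rate_kbps(f, q) <= budget_kbps]
    return max(feasible)[1:] if feasible else None
```

With analytic models in place of ad hoc rules, the same search extends naturally to allocating a shared bottleneck budget across several videos.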

Yuanyuan Zhou, University of California, San Diego
Fast Failure Diagnosis and Security Exploit Detection via Automatic Log Analysis and Enhancement

Complex software systems inevitably have complex failure modes: errors triggered by some combination of latent software bugs, environmental conditions, administrative errors, and even malicious attacks exploiting software vulnerabilities. Very often, log files are the sole data source available for vendors to diagnose reported failures or analyze security exploits. Unfortunately, it is usually difficult to conduct deep inference manually from a massive amount of log messages. Additionally, many log messages in real-world software are ad hoc and imprecise and provide very limited diagnostic information, leaving a large amount of uncertainty regarding what may have happened during a failed execution or an attack. To enable developers to quickly troubleshoot software failures and shorten system downtime, we propose automatic log inference and informative logging to make real-world software more diagnosable with regard to failures and security exploits. Specifically, our research consists of the following thrusts (the first two related to general failure diagnosis and the third to security):

(1) Practical and scalable automatic log inference for troubleshooting production failures. We will investigate techniques and build a tool that combines innovations from program analysis and symbolic execution to analyze run-time logs and source code together and infer what must or may have happened during the failed execution. The inferred information can help software engineers significantly reduce diagnosis time and quickly release patches to fix problems.

(2) Automatic log enhancement to make software easy to diagnose. We will investigate ideas and design static analysis tools to automatically enhance log messages in software to collect more causally related information for failure diagnosis. Specifically, we will explore the following two steps to make logging more effective for diagnosis: (i) We will build a tool that can automatically enhance each existing log message in a given piece of software to collect additional causally related information that programs should have captured when writing log messages. Such additional log information can significantly narrow down the number of possible code paths engineers must examine to pinpoint a failure's root cause. In brief, we enhance log content in a very specific fashion, using program analysis to identify which state should be captured at each log point (a logging statement in source code) to minimize causal ambiguity. (ii) We will explore a methodology, and build a tool, to systematically analyze logs' code coverage and identify the best locations in program code to collect intermediate diagnostic information in an informative (maximally removing diagnostic ambiguity) and efficient (low overhead) way.

(3) Log inference and log enhancement for security exploit detection and vulnerability analysis. Since many crash failures (e.g., those caused by buffer overflows) can be exploited by malicious users to launch attacks and hijack systems, quickly detecting them and identifying the vulnerable code locations is critical to security. Such vulnerabilities usually differ from other failures in that, in most cases, programmers do not anticipate the crashes, so they would not know to place logging statements at the faulty points. The questions, then, are: (1) how can existing log messages be leveraged to detect security exploits, and (2) how can logging be improved to facilitate intrusion detection and vulnerability analysis? First, we can apply similar log-code inference to help detect code vulnerabilities in the case of crash failures (e.g., those introduced by buffer overflows), to understand what code path the execution must have taken and what the key variable values were. Such information can help quickly narrow down the root cause and identify vulnerable code locations. We can also enhance existing log messages, or add new ones, to make intrusion detection and forensic analysis easier and faster. Another interesting idea to explore, unique to detecting security exploits, is to learn the normal log message sequences (either logically, by analyzing source code, or statistically, by learning from test runs), and then, at run time, monitor the stream of log messages to detect anomalies that deviate from normal behavior. For example, from program logic, a "return" statement should return to the function caller, so there is a natural order (or sequence) in the log messages. If this return is hijacked, the program execution flow will not follow the natural logical order; instead, it will jump to a different code location and produce a totally different log message order. The challenge is then how to add logs without incurring large overhead. One possible solution is to record the minimum by eliminating messages that can be inferred automatically. We can also buffer logs temporarily in memory and output them only periodically (e.g., every 10 seconds). To save space, we can also compress old logs or keep only a high-level log summary.
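The "normal log sequence" idea in the security thrust can be sketched as a simple transition model over log message identifiers: learn the set of consecutive message pairs observed in normal runs, then flag the first never-seen transition at run time (e.g., the out-of-order messages a hijacked return would produce). The message names below are hypothetical, and a real system would need richer models and sampling to bound overhead:

```python
def learn_transitions(normal_runs):
    """Learn the set of consecutive log-message pairs seen in normal runs."""
    seen = set()
    for run in normal_runs:
        seen |= set(zip(run, run[1:]))
    return seen

def first_anomaly(run, seen):
    """Return the first log transition never observed during normal runs
    (e.g., a hijacked return jumping to an unexpected code path),
    or None if the run's transitions all look normal."""
    for pair in zip(run, run[1:]):
        if pair not in seen:
            return pair
    return None
```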

Nelson Morgan, The International Computer Science Institute
Manystream Automatic Speech Recognition

We will study the expansion of the speech recognition front-end signal processing to many feature streams (using Gabor filtering and MLP feature transformation in the Tandem framework) in order to improve performance under common acoustic conditions. We will focus on experiments with single distant-microphone data from meetings, using roughly 200 hours of speech from the ICSI, NIST, and AMI corpora. If it is available, we will also incorporate data from Flip Video recordings from Cisco, which will provide a significant test of the generalization of our methods. The improved system will also be compared with the now-standard AFE approach (ETSI 2007), with model adaptation methods, and with combinations of these approaches. In particular, we want to learn the tradeoffs between increasing the data size and model complexity versus using increased computation for an expanded acoustic front end. This experimental work will be accompanied by an analytic component to learn what each of these systems is doing.

Faramarz Fekri, Georgia Tech
Understanding and Improving the Human Network (open call)

The goal of this research is to improve human networks in collaborative information-exchange sites. Examples of such networks are expert systems, community question-answering sites, and, more generally, expert-services social networks in which members exchange information as a service and possibly make a profit. These networks are becoming a popular way for millions of users to seek answers to their information needs. In such networks, the online transaction (i.e., information exchange) most likely takes place between individuals who have never interacted with each other. The consumer of the information often has insufficient information about the quality of the service provider before the transaction; hence, the recipient must assume risk up front, which leaves him unprotected because he has no opportunity to try the service before receiving it. This problem gives rise to the use of reputation systems, in which reputations are determined by rules that evaluate the evidence generated by an entity's past behavior within a protocol. The quality of these social networks, as well as the dynamics of their connectivity, therefore depend heavily on the reputation mechanism. This research proposes, for the first time, the application of belief propagation to the design of reputation systems. It represents a reputation system as a factor graph, in which the providers and consumers of an information service are arranged as two sets of variable and factor nodes, and it introduces a fully iterative message-passing algorithm based on belief propagation to solve for reputations. Preliminary results indicate the superiority of the belief propagation scheme both in computational efficiency and in robustness against (intentional or unintentional) malicious behavior. Thus, the proposed reputation system has the potential to improve both human-centric performance and ad-hoc, information-centric human networks.
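The flavor of such an iterative message-passing scheme can be sketched in a much-simplified form: provider "beliefs" are trust-weighted averages of incoming ratings, and consumer "message" weights shrink with disagreement from the consensus. This is a caricature of the idea, not the authors' belief propagation algorithm, and the update rules below are invented for illustration:

```python
import numpy as np

def iterate_reputations(ratings, iters=20):
    """Iteratively refine provider reputations and consumer trust.

    ratings[i, j] = rating consumer i gave provider j, NaN if no rating.
    """
    mask = ~np.isnan(ratings)
    R = np.nan_to_num(ratings)
    trust = np.ones(ratings.shape[0])
    for _ in range(iters):
        # Provider "belief": trust-weighted average of incoming ratings.
        w = trust[:, None] * mask
        rep = (w * R).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
        # Consumer weight: shrinks with disagreement from the consensus.
        err = np.where(mask, np.abs(R - rep), 0.0).sum(axis=1)
        err = err / np.maximum(mask.sum(axis=1), 1)
        trust = 1.0 / (1.0 + err)
    return rep, trust

# Three honest consumers and one malicious rater (row 3) rate two providers.
ratings = np.array([[0.9, 0.2],
                    [0.9, 0.2],
                    [0.9, 0.2],
                    [0.1, 0.95]])
rep, trust = iterate_reputations(ratings)
```

Even this crude scheme discounts the malicious rater and keeps the provider ranking intact, which is the kind of robustness the proposal aims to achieve with proper belief propagation on a factor graph.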

Bharat Bhuva, Vanderbilt University
Logic Designs for In-Field Repair and Failure Mitigation

Arrayed circuit designs (SRAMs or FPGAs) use redundant circuits to repair failed cells in the field.  Unlike SRAM designs, where all cells are identical and adding spare cells is not difficult, logic circuits do not have easily identifiable structures that can be added as universal spares.  For ASIC designs, the brute-force approach of duplicating entire logic blocks is too expensive and impractical to implement.  This proposal is for the development of post-fabrication, in-field repair approaches for logic circuits that allow the replacement of faulty transistors (or logic gates) without affecting the overall functionality of the IC.  The proposed approach is to use over- and under-approximations of the original logic circuit to provide redundant minterms and maxterms that can be introduced into the circuit to prevent failed transistors from affecting circuit operation.  The approximations must be designed to provide maximum coverage while incurring minimal overhead.  We propose to develop algorithms and design techniques that can be integrated into the existing design flow for Cisco and its vendors to facilitate in-field repair of logic circuits.
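The containment relations behind this redundancy scheme can be illustrated with a toy example (a hypothetical 3-variable function, not taken from the proposal), representing a Boolean function as its set of satisfying minterms:

```python
from itertools import product

# A Boolean function over 3 variables, as its set of satisfying minterms.
f = {(0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)}

# An under-approximation g implies f: every minterm of g is a minterm of f,
# so OR-ing g into the output never changes the correct behavior.
g = {(1, 1, 0), (1, 1, 1)}
space = list(product((0, 1), repeat=3))
assert g <= f

# A stuck-at fault knocks one minterm out of the implementation:
faulty = f - {(1, 1, 0)}

# OR-ing the redundant under-approximation back in masks the failure:
repaired = faulty | g
assert repaired == f
```

Dually, AND-ing in an over-approximation (a superset of f's minterms) can mask faults that erroneously drive the output high; the engineering problem the proposal targets is choosing approximations that cover many potential faults at small area cost.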

Mark Zwolinski, University of Southampton
Logic In-Field Repair Research

This proposal is made in response to RFP-2010-063. Conventional fault tolerance implies replication of hardware and thus overheads of 100% or worse. Regular fabrics, such as FPGAs, may allow reconfiguration, but such schemes commonly omit the control and monitoring mechanisms. In earlier work, we proposed a totally self-checking controller architecture. In this proposal, we intend to investigate whether such a structure can be used with a much more limited form of error checking in order to manage in-field errors. In particular, we wish to extend our scheme of self-checking controllers to monitor the condition signals that communicate the state of a datapath to the controller. By verifying the effectiveness of a limited monitoring scheme, we will also seek to discover whether a more comprehensive fault-monitoring technique can be used. This would allow a designer to make an informed decision about the degree of self-checking and repair to include in a system.

Adam Dunkels, Swedish Institute of Computer Science
IP Based Wired/Wireless Sensor/Actuator Networks

IPv6-based sensor networks are emerging, but application-layer access and service discovery are still open questions, and their interactions with low-power radio duty-cycling mechanisms are still poorly understood. In the Contiki operating system, Cisco and SICS have previously developed a low-power, IPv6 Ready network stack. In the IPSN project, we will develop a REST-based application-layer access infrastructure on top of this low-power IPv6 network, together with a service registration and discovery layer. The results of IPSN will allow low-power application-layer access to the IPv6 networks that will form the future Internet of Things.
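The registration/discovery layer can be pictured as a simple directory that maps service types to the nodes and REST resource paths that provide them. This is a toy sketch of the concept only; the node addresses, resource paths, and API are invented, and the real layer would run over a constrained-node protocol rather than in-process calls:

```python
class ServiceRegistry:
    """Toy registration/discovery directory for constrained-node services."""

    def __init__(self):
        # service type -> list of (node address, resource path)
        self.services = {}

    def register(self, service_type, node, path):
        """A node announces a REST resource under a well-known service type."""
        self.services.setdefault(service_type, []).append((node, path))

    def discover(self, service_type):
        """A client looks up all nodes offering a given service type."""
        return self.services.get(service_type, [])

reg = ServiceRegistry()
reg.register("temperature", "[aaaa::1]", "/sensors/temp")
reg.register("temperature", "[aaaa::2]", "/sensors/temp")
found = reg.discover("temperature")
```

A discovered `(node, path)` pair would then be dereferenced with an ordinary REST request to the low-power IPv6 node.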

S. J. Ben Yoo, University of California, Davis
Optical Networking

Flexible Capacity and Agile Bandwidth Terascale Networking

We propose to investigate a novel networking technology based on flexible assignment and routing of spectral bandwidths rather than wavelengths or channels.  The proposed networks exploit a new class of 100 Gb/s to 1 Tb/s transmitters and receivers with elastic bandwidth allocation, and modified ROADMs and OXCs with spectral slicing capabilities.  The resulting flexible-bandwidth networking offers the following advantages:
* dynamic optimization of elastic bandwidth allocations and configurations in the network;
* spectrally and energy-efficient network operation that minimizes unnecessary transmissions;
* bandwidth-squeezed protection/restoration that reserves minimum bandwidths in case of failure;
* flexible accommodation of very small (subwavelength) and very large (hyperwavelength) flows and circuits;
* a simplified inventory of one kind of flexible linecard rather than many different fixed linecards;
* seamless interfacing of optical networks to wireless or wireline networks, while supporting future services;
* accommodation of all formats and protocols, including OFDM, OTN, and other schemes.
Compared to the SLICE and VCAT methods pursued at NTT, Infinera, and elsewhere, the proposed schemes are the most versatile, and they exploit innovative integrated technologies developed under previous Cisco and DARPA support. We propose theoretical and experimental studies of these aspects of flexible-capacity networking under Cisco support, including setting up the world's first flexible-capacity networking testbed.
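Assigning spectral bandwidths rather than fixed wavelengths amounts to allocating runs of contiguous spectrum slots on a flexible grid. A minimal first-fit allocator sketches the idea; the 12.5 GHz slot width and first-fit policy are common conventions in the elastic-networking literature, not details from this proposal:

```python
def first_fit_spectrum(link_slots, demand_slots):
    """Return the start index of the lowest contiguous run of free slots
    that can hold the demand, or None if no such run exists.

    link_slots: list of bools, True = occupied.
    """
    run = 0
    for i, busy in enumerate(link_slots):
        run = 0 if busy else run + 1
        if run == demand_slots:
            return i - demand_slots + 1
    return None

# Ten 12.5 GHz slots on a link; slots 2-4 already carry an existing flow.
slots = [False] * 10
for s in (2, 3, 4):
    slots[s] = True

# A 37.5 GHz (3-slot) demand must skip the 2-slot gap at the band edge.
start = first_fit_spectrum(slots, 3)
for s in range(start, start + 3):
    slots[s] = True

# A later 25 GHz (2-slot) demand can still reuse the small leftover gap.
start2 = first_fit_spectrum(slots, 2)
```

The same mechanism lets subwavelength and hyperwavelength demands share one grid: a small flow takes one slot, a terabit superchannel simply requests a longer contiguous run.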

Li Chen, University of Saskatchewan
Logic In-Field Repair Research

This proposal introduces a built-in self-repair (BISR) technology to address logic in-field repair. We propose a block-based logic repair method that takes into consideration the regularity of random logic. The reconfigurable elements can be gates, adders, multipliers, or even higher-level components. Four or eight elements can be grouped into one block; for example, four functional elements including one as a backup. A switch matrix at the inputs of each block selects the corresponding elements. Most faults, including stuck-at and open-circuit faults, can be isolated using this switching method. Built-in self-test (BIST) is performed for each local logic block to detect faults during every power-up event. A test pattern is applied to the reconfigurable block by a self-test scan chain administered by a central unit. Current-sensing techniques can be combined into the scan chain for fault detection. This method can be expected to incur a hardware overhead of 10-20% if large blocks are used for switching.
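The switch-matrix remapping can be modeled behaviorally in software: a block of four interchangeable elements serves three logical lanes, and when BIST flags an element, its lane is steered to the spare. This is an illustrative model of the concept, not the proposal's circuit; the class and method names are invented:

```python
class RepairBlock:
    """Behavioral model of a 4-element block with one spare element.

    The switch matrix is modeled as an index map from logical lanes to
    physical elements; BIST results drive the remapping.
    """

    def __init__(self, elements):
        assert len(elements) == 4
        self.elements = elements        # physical elements (callables)
        self.faulty = [False] * 4
        self.lane_map = [0, 1, 2]       # 3 active lanes; element 3 is spare

    def mark_faulty(self, phys):
        """BIST reports element `phys` bad; steer its lane to a free spare."""
        self.faulty[phys] = True
        spares = [i for i in range(4)
                  if not self.faulty[i] and i not in self.lane_map]
        for lane, p in enumerate(self.lane_map):
            if self.faulty[p]:
                self.lane_map[lane] = spares.pop()

    def run(self, lane, x):
        return self.elements[self.lane_map[lane]](x)

inc = lambda x: x + 1                   # all four elements implement x + 1
blk = RepairBlock([inc, inc, inc, inc])
blk.mark_faulty(1)                      # power-up BIST finds element 1 bad
```

After remapping, logical lane 1 transparently uses physical element 3, so the block's function is preserved despite the fault.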

Sang H. Baeg, Hanyang University
Logic In-Field Repair Research

Major reliability issues such as SER, NBTI, and VDDmin are increasingly difficult to detect and diagnose in emerging technologies. The proposed research is to develop an innovative way of designing flip-flops to detect and diagnose such reliability faults in flip-flop elements. In the proposed method, the noise margin can be programmed for a subset of flip-flop elements, establishing a targeted reliability window to selectively screen the reliability level of the corresponding flip-flops. A novel grouped flip-flop (GF) structure is proposed as the key mechanism for implementing the programmable noise margin. The research activity includes re-designing the scan logic in the GF, performing the physical structure design of the GF, finding test pattern dependencies, and determining the programmable noise margin range for the N flip-flops in a GF.

Olivier Bonaventure, Université catholique de Louvain
Preserving the reachability of LISP xTRs

The main focus of this proposed research is to develop techniques that allow Ingress and Egress Tunnel Routers (xTRs) using the Locator/Identifier Separation Protocol (LISP) to deal efficiently with failures. We will focus on LISP sites that are equipped with at least two xTRs. Our first objective is to develop cache synchronisation techniques that allow the ITRs of a site to maintain synchronised mapping caches, ensuring that the site's outgoing traffic is not affected by the failure of one of its ITRs. Our second objective is the development of techniques that allow LISP-encapsulated traffic to be preserved even when one of the site's ETRs becomes unreachable. We will evaluate the performance of these techniques using trace-based simulations and lab experiments with LISP-Click.
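One plausible synchronisation design, sketched below, replicates each newly installed mapping to the co-located ITRs so that a failover never hits a cold cache. This push-based replication is an assumption for illustration, not the proposal's mechanism, and the class and addresses are invented:

```python
class ITR:
    """Toy ingress tunnel router with a map-cache synced to its siblings."""

    def __init__(self, name):
        self.name = name
        self.cache = {}        # EID prefix -> list of RLOCs
        self.siblings = []     # co-located ITRs at the same LISP site

    def install_mapping(self, eid_prefix, rlocs, propagate=True):
        """Install a mapping (e.g., from a Map-Reply) and push it to peers."""
        self.cache[eid_prefix] = rlocs
        if propagate:
            for peer in self.siblings:
                peer.install_mapping(eid_prefix, rlocs, propagate=False)

itr1, itr2 = ITR("itr1"), ITR("itr2")
itr1.siblings.append(itr2)
itr2.siblings.append(itr1)

# A mapping learned by itr1 is immediately available at itr2, so outgoing
# traffic rerouted to itr2 after a failure finds a warm cache.
itr1.install_mapping("2001:db8::/32", ["192.0.2.1", "198.51.100.7"])
```

Alternatives the research would weigh include pull-on-miss synchronisation and periodic cache digests, which trade replication traffic against failover latency.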

Sonia Fahmy, Purdue University
Routing and Addressing

Detecting and preventing routing configurations that may lead to undesirable stable states is a major challenge. We propose to (1) design and develop a prototype tool to detect BGP policy interactions based on a mathematical framework, and (2) apply this prototype tool to our unique datasets obtained from multiple ISPs.  We will identify the conditions under which unintended or problematic routing states can occur, and develop algorithms to detect these conditions.  The goal of our study is not only to expose potential problems arising from current configurations, but also to determine the minimum information ISPs must exchange to confirm that their global policy interactions are safe.
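At its simplest, a problematic policy interaction shows up as a cycle in a digraph of route preferences, as in the classic "bad gadget" in which each AS prefers the route through its neighbor over its direct route. The sketch below is a toy caricature of such detection using ordinary DFS cycle finding; a real safety analysis over dispute wheels is considerably subtler:

```python
def has_cycle(graph):
    """DFS cycle detection in a directed graph {node: [successors]}."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {u: WHITE for u in graph}

    def dfs(u):
        color[u] = GREY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GREY:      # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in list(graph))

# Edge u -> v: AS u prefers the route learned via AS v over its direct route.
bad_gadget = {"AS1": ["AS2"], "AS2": ["AS3"], "AS3": ["AS1"]}  # can oscillate
safe = {"AS1": ["AS2"], "AS2": ["AS3"], "AS3": []}             # acyclic
```

The circular preferences in `bad_gadget` mean no stable assignment satisfies every AS at once, which is precisely the kind of configuration the proposed tool aims to flag before deployment.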

Alan Bovik, University of Texas
Video (Open call)

The next generation of wireless networks will become the dominant means of video content delivery, leveraging rapidly expanding cellular and local-area network infrastructure. Unfortunately, the application-agnostic paradigm of current data networks is not well suited to meet the projected growth in video traffic volume, nor is it capable of leveraging the unique characteristics of real-time and stored video to make more efficient use of existing capacity. As a consequence, wireless networks are unable to exploit the spatio-temporal bursty nature of video. Further, they can neither make rate-reliability-delay tradeoffs nor adapt to the ultimate receiver of video information: the human eye-brain system. We believe that video networks at every time-scale and layer should operate and adapt with respect to perceptual distortions in the video stream. In our opinion, the key research questions are:
* What are the right models of perceptual visual quality for 2-D and 3-D video?
* How can video quality awareness across a network be used to improve network efficiency over short, medium, and long time-scales?
* How can networks adaptively deliver high perceptual visual quality?
We propose three research vectors. Our first vector, Video Quality Metrics for Adaptation (VQA), focuses on advances in video quality assessment and corresponding rate-distortion models that can be used for adaptation. Video quality assessment for 2-D video is only now being understood, while 3-D assessment is still largely an open research topic. Video quality metrics are needed to define operational rate-distortion curves that can be used to make scheduling and adaptation decisions in the network. Our second vector, Spatio-Temporal Interference Management (STIM) in Heterogeneous Networks, exploits the characteristics of both real-time and streamed video to achieve efficient delivery in cellular systems. In one case, we will use the persistence of video streams to devise low-overhead interference management for heterogeneous networks. In another, we will exploit the buffering of streamed video at the receiver to opportunistically schedule transmissions when conditions are favorable, e.g., when mobile users are close to the access infrastructure. Our third vector, Learning for Video Network Adaptation (LV), proposes a data-driven learning framework to adapt wireless network operation. We will use learning to develop link-adaptation algorithms for perceptually efficient video transmission. Learning agents will collect perceptual video quality distortion statistics and use this information to configure cross-layer transport and interference management algorithms. At the application layer, we propose using learning to predict user demand, which can be exploited to predictively cache video and smooth traffic over the wireless link.
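The role of an operational rate-distortion curve in adaptation can be sketched with a toy model: given a perceptual quality curve over encoding rates, the network picks the highest-quality rung of a bitrate ladder that fits the measured capacity. Both the logarithmic quality model and the ladder values below are hypothetical, standing in for a real VQA-derived curve:

```python
import math

def perceptual_quality(rate_kbps):
    """Toy diminishing-returns rate-quality model (0-100 scale).

    A stand-in for a real perceptual metric; not from the proposal.
    """
    return 100 * (1 - math.exp(-rate_kbps / 1500.0))

def pick_rate(ladder_kbps, capacity_kbps):
    """Choose the highest-quality feasible rung of the bitrate ladder."""
    feasible = [r for r in ladder_kbps if r <= capacity_kbps]
    if not feasible:
        return min(ladder_kbps)        # degrade gracefully, never stall
    return max(feasible, key=perceptual_quality)

ladder = [300, 750, 1500, 3000, 6000]   # hypothetical encoding rates, kbps
rate = pick_rate(ladder, capacity_kbps=2000)
```

Because perceptual quality flattens at high rates, such a curve also tells the scheduler when extra capacity buys little visible improvement and is better spent on other users, which is the core of quality-aware network adaptation.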