Connection to the Research

Award Recipients

Srinivasan Ramasubramanian, University of Arizona
Fast Recovery from Multiple Link Failures in IP Networks

The goal of this project is to demonstrate a scalable fast recovery technique for recovering from multiple link failures. A k-edge connected network will remain connected after any arbitrary (k − 1) link failures. In such a network, we will show that we can recover from any arbitrary (k − 1) link failures by employing k arc-disjoint spanning trees for each destination.

The highlights of our approach are: (1) packets are forwarded on the shortest-path tree when there are no failures; (2) every router has exactly (k + 1) forwarding entries per destination; (3) every packet carries a (k + 1)-bit overhead; (4) the computation time for k arc-disjoint trees per destination is sub-quadratic in the size of the network; and (5) forwarding and recovery decisions are made with only local information and information carried in the packet. In this project, we seek to understand, analyze, and evaluate the performance of IP fast rerouting using arc-disjoint trees and to compare its performance against previously known techniques for IP fast recovery.
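To make the forwarding logic above concrete, the following minimal sketch (our own illustration, not the project's algorithm or implementation) shows how a router with (k + 1) forwarding entries per destination might use the (k + 1)-bit vector carried in the packet to fall back to the next arc-disjoint tree after detecting a local link failure; all names and data structures are hypothetical.

```python
# Hypothetical sketch of per-destination forwarding over k arc-disjoint trees.
# Not the project's implementation; names and data structures are illustrative.

class Router:
    def __init__(self, name, next_hops):
        # next_hops[dest] is a list: entry 0 is the shortest-path next hop,
        # entries 1..k are the next hops on the k arc-disjoint trees.
        self.name = name
        self.next_hops = next_hops
        self.failed_links = set()          # locally detected link failures

    def forward(self, dest, tree_bits):
        """tree_bits is the (k + 1)-bit vector carried in the packet header,
        marking which forwarding entries have already been abandoned."""
        entries = self.next_hops[dest]
        for i, nh in enumerate(entries):
            if tree_bits[i]:
                continue                    # this tree was already abandoned upstream
            if (self.name, nh) in self.failed_links:
                tree_bits[i] = True         # record the switch in the packet
                continue
            return nh, tree_bits            # forward on the first usable entry
        return None, tree_bits              # more than k failures: drop

# Example: k = 2 trees plus the shortest-path entry for destination "D".
r = Router("A", {"D": ["B", "C", "E"]})
r.failed_links.add(("A", "B"))
print(r.forward("D", [False, False, False]))   # -> ('C', [True, False, False])
```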

Søren Forchhammer, DTU Fotonik
Video Quality Assessment Methodology for Multi-Screen Experience

The delivery of video to end customers is becoming increasingly heterogeneous. "Video to any device" and ABR (adaptive bitrate streaming) pose new challenges to video quality, e.g. the constancy of video quality, which is a key factor in keeping end users connected to the video. The study shall extend the scope of objective and subjective video quality assessment to encompass and characterize these novel classes of video delivery. A video quality (VQ) framework involving different (hybrid) compression techniques, e.g. MPEG-2, AVC, and HEVC, will be developed, based on common features related to video compression. The expected outcome is a general framework that can handle all of these compression standards and is easily extended to future standards. VQ across multiple screens and formats involves issues such as video resolution, frame rate, screen size, and bit-rate, which we shall characterize as the space of coded video display. The evaluation shall include video constancy over time and across different displays, including 2nd/3rd screen video and ABR. The goal of the project is a unified video quality framework within this scope.

Dongyan Xu, Purdue University
Advanced Targeted Attack Detection via Malware Protocol Reverse Engineering and Log Analytics

Enterprises today face increasing threats from advanced targeted attacks (also called advanced persistent threats or APTs), which tend to be stealthy, low-and-slow, and sometimes disguised via psychosocial campaigns. Unfortunately, current cyber attack defenses, such as signature-based firewalls and manual malware analysis tools, are not keeping pace with such attack trends. Meanwhile, enterprises continue to generate large volumes of logs/traces at both the network and system levels, but these remain under-utilized in cyber attack detection and investigation. In this research, we propose to develop an integrated framework for advanced targeted attack detection which consists of two major components: (1) X-Force, a zero-day malware sample reverse engineering engine, and (2) LogAn, a Big Data analytics platform for network and system log analysis and correlation. As a typical usage scenario, a piece of unknown malware binary is captured by an existing enterprise security facility (e.g., email attachment inspection). However, traditional malware analysis tools cannot even get it running because of its advanced packing and obfuscation logic. With our framework, this unknown binary will first be analyzed by X-Force, which automatically reverse-engineers the malware's communication protocol with its peers and with the attacker. The reverse engineering results will be in the form of malware message formats (with both syntax and semantics) and a protocol state machine. These results will then guide LogAn to analyze massive network and system logs from the production network and end-hosts, to detect malicious network flows, reconstruct malware messages, and identify infected hosts and the enterprise assets being targeted, all in a timely fashion, hence preventing or minimizing loss. A prototype of the framework will be developed, demonstrated, and evaluated in terms of detection accuracy and speed.
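As a toy illustration of the LogAn side of this pipeline (our own sketch; the abstract does not specify X-Force's output format or LogAn's interfaces), a reverse-engineered message format could be expressed as a pattern and applied to flow-log payloads to flag suspected command-and-control traffic. All field names and the log record layout below are assumptions.

```python
# Illustrative only: a toy matcher that applies a reverse-engineered message
# format (expressed here as a regex with field names) to flow-log payloads.
import re

# Hypothetical output of protocol reverse engineering: a C2 "beacon" format.
beacon_format = re.compile(rb"^ID=(?P<bot_id>[0-9a-f]{8});CMD=(?P<cmd>[A-Z]+);")

def scan_flows(flow_records):
    """flow_records: iterable of (src_ip, dst_ip, payload_bytes) tuples."""
    hits = []
    for src, dst, payload in flow_records:
        m = beacon_format.match(payload)
        if m:
            hits.append({"src": src, "dst": dst,
                         "bot_id": m.group("bot_id").decode(),
                         "cmd": m.group("cmd").decode()})
    return hits

flows = [("10.0.0.5", "203.0.113.9", b"ID=deadbeef;CMD=EXFIL;data..."),
         ("10.0.0.7", "198.51.100.2", b"GET /index.html HTTP/1.1")]
print(scan_flows(flows))   # flags the first flow as a suspected C2 beacon
```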

Russell J Clark, Georgia Institute of Technology
Software Defined Infrastructure Research and Education

This project will support the development of educational materials in the area of Software Defined Infrastructure. It will provide funding for students and research faculty who will operate the SDI equipment and technologies in support of the research efforts of faculty and students, both graduate and undergraduate. The funding will also support our efforts to foster new application ideas for SDI.

Victor Lempitsky, Skolkovo Institute of Science and Technology
Intra- and across-camera data association for video surveillance

The proposal focuses on data association problems within visual surveillance. Such problems arise in at least two forms. First, there is the task of tracking multiple people within a single surveillance video, where noisy person detections in individual frames must be associated in order to obtain person trajectories (tracks). Here, the association is performed based on visual appearance, but also on spatial proximity and velocity analysis. The second class of data association problems arises in most practical surveillance application scenarios, such as retrieval, tracking across sparse camera arrays, and long-range track analysis. Here, it is required to associate tracks observed by different cameras with non-overlapping fields of view. In this case, visual appearance cues (such as colors, shapes, gait, and demographics-level features) are the only cues that can be taken into account.

We will work on both association-related tasks, i.e. we will address both the task of visual appearance-based association and the task of multi-person tracking in videos. The two tasks are naturally dependent, as better appearance-based association would benefit multi-person tracking. Following recent trends in computer vision, we will use large-scale "deep" machine learning to build better visual appearance descriptors, both to benefit appearance-based association and to enable appearance-based estimation of demographic properties. For multi-person tracking, we will develop algorithms that integrate smart detection-candidate generation approaches, such as detection cascades, with recent tracking-by-detection approaches for multiple-track inference. We plan to evaluate the developed methods on the real-life, GIS-registered multi-camera setup that we are currently deploying at Moscow State University.
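As a minimal sketch of the appearance-based association step (illustrative only; the project's descriptors, matching procedure, and thresholds are not specified in the abstract), detections can be linked to existing tracks by comparing deep descriptors with cosine similarity:

```python
# A minimal sketch of appearance-based association: compare deep descriptors
# by cosine similarity and greedily match detections to existing tracks.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def associate(track_descriptors, detection_descriptors, threshold=0.7):
    """Greedy one-to-one matching on cosine similarity (illustrative only)."""
    matches, used = [], set()
    for t_id, t_vec in track_descriptors.items():
        best_d, best_s = None, threshold
        for d_id, d_vec in detection_descriptors.items():
            s = cosine(t_vec, d_vec)
            if d_id not in used and s > best_s:
                best_d, best_s = d_id, s
        if best_d is not None:
            matches.append((t_id, best_d, best_s))
            used.add(best_d)
    return matches

rng = np.random.default_rng(0)
tracks = {1: rng.normal(size=128)}
dets = {"a": tracks[1] + 0.05 * rng.normal(size=128), "b": rng.normal(size=128)}
print(associate(tracks, dets))   # track 1 matches detection "a"
```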

Qiang Xu, The Chinese University of Hong Kong
Concurrent Testable and Reliable 2.5D SoC Design with Reconfigurable Logic Dies

This proposal investigates a novel, expandable 2.5D SoC design methodology with health monitoring, concurrent online test, and self-healing capabilities, achieved by stacking a dedicated FPGA die for health management onto the silicon interposer together with the original design. Specifically, we construct a hierarchical health-monitoring infrastructure and a corresponding in-field repair scheme for the SoC design by adding various types of sensors on the dedicated FPGA die. With its dynamic reconfigurability, the test patterns/sequences applied to the system can evolve with changing test needs, thus enabling adaptive and concurrent online test with better fault coverage at low hardware cost. Moreover, once a particular component is deemed defective, a copy implemented on the FPGA serves as a self-healing solution, while the rest of the FPGA logic (if any) can still be used to concurrently test and repair other modules.

Dimitris Gizopoulos, University of Athens
Accelerated Online Detection of Functional and Performance Errors in Complex Multiprocessor Chips

This is the second-year follow-up proposal; Cisco generously provided a research gift to the Computer Architecture Lab of the University of Athens, through the Silicon Valley Community Foundation, for the first year (2013) of this 3-year research project. This follow-up proposal describes the progress of the first year (which includes, among other results, two accepted papers and a third under preparation), the current status of the research, and the research plan for the second year (2014). We are grateful to Cisco and the Silicon Valley Community Foundation for supporting our research. This is the first time a Greek university group has been funded by Cisco, one of the leading companies worldwide, a fact that has been greatly appreciated at the University of Athens and in the Greek research community. We are also grateful to our Cisco sponsor Dr. Bill Eklow for his continuous guidance and support of our work. We hope the progress of our research during the first year meets Cisco's high quality criteria and justifies a second year of follow-up support.

Research Brief Summary
The overall project focuses on the challenges that complex multiprocessor chips bring to online hardware error detection. Multiple processor instances are now abundant in both general-purpose microprocessors and application-specific processors. Although error detection in single processor cores has been studied in the past, emerging multiprocessor chips bring new challenges. Our 3-year research focuses on the following challenges of generic many-core architectures (general-purpose multiprocessors or specialized MPSoCs, including NPU architectures): (a) Online error detection time optimization: test application time (and thus error detection latency) in multicore architectures can increase severely with the number of processor cores n and on-chip memory cores m. Emphasis is on significant reduction of test execution time (first results already show a severalfold speedup using our algorithms compared to a naive application of tests to all cores). (b) Unification of error detection for processor and memory cores: the co-existence of processors and memories in such massively parallel architectures requires careful orchestration of test application and error detection. This part of the research is closely related to part (a), as it determines the requirements of the test programs for both types of cores and the ability to execute them in parallel. (c) Differentiation between functionality and performance faults: the massive investment of hardware resources in multicore architectures can be salvaged in the presence of hardware faults when such faults are classified into those that affect correctness and those that affect performance only. A multicore chip can continue operating in a degraded (but still correct) mode in the presence of performance faults only.
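A toy example of the scheduling question behind (a) and (b) (a hypothetical greedy grouping, not the project's algorithm): if cores that share a memory resource cannot be tested concurrently, grouping non-conflicting cores already shows how parallel test application shortens total test time relative to testing all n cores one by one. The conflict sets and per-test time below are invented for illustration.

```python
# Toy illustration (not the project's algorithm): group per-core tests so that
# cores sharing a memory controller are not tested concurrently, then run the
# groups back to back. Conflict pairs and test times are hypothetical.

def schedule(cores, conflicts, test_time=1.0):
    """Greedy grouping: place each core in the first group with no conflict."""
    groups = []
    for c in cores:
        for g in groups:
            if all((c, other) not in conflicts and (other, c) not in conflicts
                   for other in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups, len(groups) * test_time      # parallel time vs. len(cores) * t

cores = list(range(8))
# Hypothetical constraint: cores i and i + 4 share a memory port.
conflicts = {(i, i + 4) for i in range(4)}
groups, t_parallel = schedule(cores, conflicts)
print(groups, t_parallel, "vs sequential", len(cores) * 1.0)
```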

Konstantinos Psounis, USC
Rateless encoded UDP for error-resilient wireless links

It is well known that wireless connections are challenging environments for large data transfers. Current reliable transport-layer protocols are not "wireless friendly"; for example, the TCP back-off and recovery mechanisms react in the same way to packets lost due to congestion as to those lost due to corruption, and timeouts caused by lossy links cause TCP to substantially underutilize the link. Despite the large body of work over the last decade on designing wireless-friendly versions of Internet protocols, the problem is still acute. Recently, a promising new approach based on advanced coding techniques, including forward error correction, linear network coding, and rateless codes, has emerged. However, existing proposals do not address important real-world considerations. Motivated by this, in this proposal we introduce a novel approach based on a rateless-encoded, UDP-style transport protocol. In the course of the project we plan to analyze its performance, address practical considerations, implement the protocol, and, in collaboration with Cisco engineers, test it in the context of point-to-point lossy wireless links.
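For illustration only (this is not the proposed protocol), a rateless, LT-style sender over UDP can be sketched as follows: each datagram carries the XOR of a pseudo-random subset of source blocks plus the seed identifying that subset, so any sufficiently large collection of datagrams suffices for decoding. The degree distribution, header format, and destination below are assumptions, and the decoder and all practical considerations are omitted.

```python
# A minimal sketch (not the proposed protocol) of a rateless, LT-style encoder
# over UDP: each output datagram is the XOR of a random subset of source
# blocks, identified by a seed so the receiver can regenerate the subset.
import os, random, socket, struct

BLOCK = 1024

def encode_stream(data, dest=("127.0.0.1", 9999), overhead=1.5):
    blocks = [data[i:i + BLOCK].ljust(BLOCK, b"\0")
              for i in range(0, len(data), BLOCK)]
    k = len(blocks)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(int(k * overhead)):          # send ~50% more symbols than blocks
        seed = random.getrandbits(32)
        rng = random.Random(seed)
        degree = rng.randint(1, max(1, k // 2)) # toy degree distribution
        chosen = rng.sample(range(k), degree)
        symbol = bytearray(BLOCK)
        for idx in chosen:
            for j in range(BLOCK):
                symbol[j] ^= blocks[idx][j]
        # Header: seed and k; a real protocol would add sequencing, FEC params, etc.
        sock.sendto(struct.pack("!IH", seed, k) + bytes(symbol), dest)
    sock.close()

encode_stream(os.urandom(8 * BLOCK))   # decoding (belief propagation) is omitted
```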

Sharon Goldberg, Trustees of Boston University
Hardening the RPKI against Faulty or Misbehaving Authorities

The Resource Public Key Infrastructure (RPKI) is a new security infrastructure that relies on trusted authorities to prevent some of the most devastating attacks on interdomain routing. The threat model for the RPKI supposes that authorities are trusted and routing is under attack. The proposed research seeks to mitigate the risks that arise when this threat model is flipped --- when RPKI authorities are misconfigured, compromised, or compelled to misbehave. The proposed work will directly affect the standardization and deployment of the RPKI and has three main thrusts: (1) Designing distributed systems for robust distribution of RPKI objects. (2) Designing monitoring systems that can detect RPKI errors, misconfigurations, and manipulations. (3) Increasing transparency by augmenting the RPKI architecture with cryptographic techniques that make it easier to distinguish normal system behavior from misconfigurations and abuse by RPKI authorities.

Z. Morley Mao, University of Michigan
SDN-based Local and Cloud analytics for IoE/IoT Service Support

The goal of this project is to develop a software infrastructure, with associated algorithms and systems, for SDN-based data analytics that effectively supports IoE/IoT services. The amount of data generated by such services can be overwhelming in terms of bandwidth requirements as well as the CPU and memory overhead of carrying out intelligent data analysis. A flexible and modular software infrastructure that leverages SDN programmability is an important technique for dynamically choosing between local data correlation and cloud-based backend processing. Furthermore, the actual processing performed is also determined by the application service logic as well as actual data observations. SDN allows the flexibility of controlling network components, such as Cisco 819 Integrated Services Routers at different locations, which in turn can control the sensors on the fly. We also focus on ensuring the reuse of the software architecture across different application scenarios, accommodating diverse requirements through a component-based and flexible programming interface design.

The proposed systems, algorithms and methods will be empirically evaluated and our prototype will be deployed on a Cisco 819 router to demonstrate the intelligent local and remote cloud analytics supporting a few representative IoE/IoT services.

P. Brighten Godfrey, University of Illinois at Urbana-Champaign
High-throughput routing for unstructured data center networks

This proposal is the second phase of our project, named Jellyfish, which is developing a novel data center network interconnect with the goals of (1) flexible expandability and (2) high throughput. Funded by Cisco in 2012, Jellyfish is a high-capacity network interconnect which, by adopting a random graph topology, lends itself naturally to incremental expansion and, surprisingly, is 25-40% more bandwidth-efficient than traditional hierarchical data center network topologies. The first phase of the project focused on fundamental topology design, performance evaluation, and evaluation of multipath TCP.
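A small sketch of the random-graph idea (the parameters, comparison topology, and metrics below are our own illustration, not the project's evaluation): wiring switches as a random regular graph tends to shorten paths relative to simple structured topologies of the same degree, which is one intuition behind the throughput advantage.

```python
# Sketch of the Jellyfish-style topology idea: connect top-of-rack switches as
# a random regular graph and compare path lengths with a structured topology
# of the same degree. Requires networkx; parameters are illustrative.
import networkx as nx

n_switches, fabric_ports = 64, 8

jellyfish = nx.random_regular_graph(fabric_ports, n_switches, seed=1)
ring = nx.circulant_graph(n_switches, range(1, fabric_ports // 2 + 1))

for name, g in [("random (Jellyfish-like)", jellyfish), ("structured ring", ring)]:
    if not nx.is_connected(g):
        continue                       # rare for random regular graphs this dense
    print(f"{name}: avg path length "
          f"{nx.average_shortest_path_length(g):.2f}, diameter {nx.diameter(g)}")
# Shorter average paths consume fewer links per flow, one intuition behind the
# reported bandwidth advantage of random topologies.
```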

Based on feedback from interaction with Cisco engineers and others, in this next phase we focus on high-throughput routing for unstructured networks. This is both the most critical component to practical deployment of Jellyfish, and is also of significant independent interest for Software-Defined Networking (SDN) controllers. Even in traditional highly structured networks, SDN controllers will encounter irregular structure due to network failures which are common in large networks. Our work will enable consistent high throughput even in these exceptional cases and in nontraditional topologies like Jellyfish.

In addition, we will produce and freely release a suite of tools to benchmark the performance of data center topologies, and routing within those topologies, in order to enable open, reproducible research.

Jeongyeup Paek, Hongik University
Enhancing IETF RPL protocol to Support Mixed Mode of Operations for Internet of Things

IP-based wireless sensor/actuator networks are rapidly being deployed for the smart electrical grid, home automation, industrial monitoring, health care, and the Internet of Things (IoT) in general, with millions of devices connected over wireless mesh networks. RPL is an IPv6 routing protocol for low-power and lossy networks (LLNs), designed for resource-constrained embedded devices to meet the requirements of a wide range of such LLN applications. In this project, we focus on the problem of supporting two modes of operation (MOPs) defined in the RPL standard (RFC 6550). Specifically, we plan to show that there is a serious connectivity problem in the RPL protocol when two MOPs are mixed within a single network, even for standards-compliant implementations, and that it may result in network partitions. Our goal is to propose major enhancements to the RPL protocol so that nodes with different MOPs can communicate gracefully in a single RPL network while preserving high bi-directional data delivery performance and backward compatibility. This will enable the design of RPL networks with heterogeneous MOPs, opening up the potential for novel application development.
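For context on the mixed-MOP problem, a simplified sketch of the standard behavior the project starts from (not the proposed enhancement) is shown below: RFC 6550 carries a 3-bit Mode of Operation field in the DIO, and a node whose own MOP differs from the DODAG's may join only as a leaf, which is one way heterogeneous MOPs can fragment multihop connectivity. Field handling and constants are simplified.

```python
# Hedged sketch of standard RPL behavior (RFC 6550, simplified): a node whose
# MOP differs from the DIO's MOP may join the DODAG only as a leaf.

MOP_NON_STORING, MOP_STORING = 1, 2    # two of the standard MOP code points

def join_decision(node_mop, dio_mop):
    if node_mop == dio_mop:
        return "join as router"
    return "join as leaf only"          # cannot forward on behalf of others

# The mixed-MOP scenario the proposal targets: a storing node hears a
# non-storing DODAG and is forced into the leaf role, which can cut off
# multihop connectivity for nodes behind it.
print(join_decision(MOP_STORING, MOP_NON_STORING))   # -> "join as leaf only"
```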

Douglas Comer, Purdue University
Lightweight Mesh Network Protocols For The Internet Of Things Using Wi-SUN FAN

We propose to design, implement, and assess a self-organizing mesh network and a home gateway using 802.15.4 radios and protocols being standardized by the IETF and adopted by the Wi-SUN Alliance as part of their Field Area Network (FAN) suite.

The project has significant potential benefits for Cisco. Before the protocols can be adopted widely, a reference implementation must be available for use in interoperability testing. Our reference version will be available to others, and will help spread IoT protocols to other academic and industrial groups.

After building a reference stack, we will assess performance and security, including secure/trusted provisioning and configuration, secure communication, and lightweight implementation of security mechanisms on constrained devices.

Lan Wang, The University of Memphis
Scaling NDN Routing through Name Mapping

Named Data Networking (NDN) is a newly proposed design for a future Internet architecture that enables efficient and secure data distribution. However, one commonly raised concern about NDN's feasibility is its routing scalability. To scale NDN routing, we propose mapping application names to the corresponding providers' ISP names and maintaining only ISP names in the global routing system. Our work will focus on the design, implementation, and evaluation of two types of name mapping systems.
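A minimal sketch of the mapping idea (the names, mapping records, and FIB below are illustrative, not the proposed systems): an Interest's application name is resolved by longest-prefix match to the provider's ISP name, so only ISP names need entries in the global FIB.

```python
# A minimal sketch of the mapping idea: application name prefixes resolve to
# the provider's ISP name, and only ISP names appear in the global FIB.

mapping = {                       # hypothetical mapping-service records
    "/cnn/video": "/isp-a",
    "/university/cs": "/isp-b",
}

global_fib = {"/isp-a": "face1", "/isp-b": "face2"}   # ISP names only

def forward(interest_name):
    # Longest-prefix match of the application name against mapping records.
    parts = interest_name.split("/")
    for i in range(len(parts), 1, -1):
        prefix = "/".join(parts[:i])
        if prefix in mapping:
            isp = mapping[prefix]
            return global_fib[isp], isp   # forward toward the provider's ISP
    return None, None

print(forward("/cnn/video/news/clip42.mp4"))   # -> ('face1', '/isp-a')
```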

Nick Feamster, Georgia Tech
Demand Characterization and Management for Access Networks

The growth of content and our reliance on well-performing, highly available access networks are outpacing last-mile capacity and performance. We need better ways of understanding the demands that users place on access networks, and to what extent users might adjust their behavior when subjected to different incentives (whether monetary or otherwise). Unfortunately, despite the proliferation of broadband connectivity in homes around the world, little is known about how users actually use these networks and to what extent different incentive mechanisms can help manage existing demand. The project will proceed in three stages:

  1. Characterization study. We will use data on user traffic demands that we can collect from the BISmark platform to characterize the current traffic demands of home users. We will also attempt to characterize user traffic demands with additional datasets; through our existing research collaborations at University of Cambridge and King's College London, we are also aiming to study usage data on BBC's iPlayer.
  2. Algorithms for demand management. We will perform controlled experiments to determine whether various demand-side management approaches, such as imposing tiered pricing based on time of day or on content or application type, could serve as effective "peak shaving" mechanisms (a toy illustration follows this list). The results of our study will inform service providers about current demand trends and provide insights into how different pricing and contract structures can help manage user demand.
  3. Prototype system and deployment. Finally, we will apply the lessons from our controlled experiments to design a prototype system that will provide incentives to users to shift their traffic demands based on their current usage patterns. This prototype will naturally extend our existing work on software-defined networking systems for managing resource usage in home networks. In addition to developing the network management systems for performing accounting and resource management, we will work with researchers in human-computer interaction to design interfaces that allow users to respond to the incentive mechanisms that we design in other parts of this project.
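As a toy illustration of the peak-shaving question in item 2 (all numbers and parameters below are assumed, not measured data), the following sketch shows how much evening peak demand would shift if a fraction of users deferred their elastic traffic to a cheaper off-peak period:

```python
# Toy illustration of "peak shaving" with assumed numbers, not measured data:
# how much peak-hour demand shifts if some users respond to a cheaper off-peak
# price by deferring elastic traffic.

hourly_demand_gb = [2, 1, 1, 1, 1, 2, 4, 6, 7, 7, 6, 6,
                    6, 6, 6, 7, 8, 10, 12, 13, 12, 9, 5, 3]   # hypothetical
peak_hours = range(18, 22)
elastic_share, responsive_users = 0.4, 0.5    # assumed parameters

shifted = 0.0
shaped = hourly_demand_gb[:]
for h in peak_hours:
    move = hourly_demand_gb[h] * elastic_share * responsive_users
    shaped[h] -= move
    shifted += move
# Spread deferred traffic uniformly over the cheap overnight hours (0-5).
for h in range(0, 6):
    shaped[h] += shifted / 6

print("peak before:", max(hourly_demand_gb), "GB/h, after:", round(max(shaped), 1))
```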

Ravi Sandhu, The University of Texas at San Antonio
Securing Named Data Networking with Lightweight Integrity Verification

Named Data Networking (NDN) is a recently proposed Future Internet architecture that aims to address several limitations of host-based IP networks by supporting multi-path forwarding, in-network content caching, and location-independent data access. Security is a built-in feature of NDN: each data packet is signed so that network nodes and end users can verify the authenticity and integrity of a content object. However, caching and location-independent content access deprive a content provider of the ability to control content access, e.g., who can cache a content object. To address this issue, we propose a Lightweight Integrity Verification Architecture (LIVE), an extension to the NDN protocol, to restore this capability. LIVE leverages the universal content signature verification mechanism in NDN to control content access at NDN nodes: authorized nodes can successfully verify the integrity of a data packet and thus cache and access it, while unauthorized nodes cannot verify it and will drop it. We propose a lightweight signature scheme for content routers based on hash functions. In particular, we introduce opportunistic encryption in LIVE to prevent adversaries from compromising or bypassing LIVE in NDN. We will implement LIVE in the NDN prototype and demonstrate its benefits experimentally.
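The sketch below illustrates only the general flavor of keyed, hash-based verification at caches (it is not the LIVE construction, whose token scheme and key distribution the project will design): routers holding the provider's key can verify and cache a content object, while routers without it fail verification and drop the object.

```python
# Illustrative only (not the LIVE construction): a keyed-hash token lets
# authorized routers verify and cache a content object, while routers without
# the key fail verification and drop it.
import hashlib, hmac, os

provider_key = os.urandom(32)          # shared with authorized caches out of band

def sign_content(name, payload):
    token = hmac.new(provider_key, name.encode() + payload, hashlib.sha256)
    return token.hexdigest()

def router_verify(name, payload, token_hex, key):
    if key is None:
        return False                   # unauthorized router: cannot verify, drops
    expected = hmac.new(key, name.encode() + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token_hex)

name, payload = "/videos/clip1", b"...content bytes..."
tok = sign_content(name, payload)
print(router_verify(name, payload, tok, provider_key))   # authorized -> True
print(router_verify(name, payload, tok, None))           # unauthorized -> False
```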

Willie Donnelly, Waterford Institute of Technology
Semantic Uplift of Network Data to Support Enterprise Social Networking Services

This project will develop techniques for semantic uplift of network data to support and improve the design and delivery of enterprise social networking services. Understanding end-user experience when using social networking services in the enterprise can help providers tailor their services to better meet user expectations. Social network analytics are often focused on social network graphs and community interactions, but are currently agnostic of data related to service performance and usage that could be collected and processed directly from the enterprise's networking devices. On the other hand, communications network analytics are used to gauge network performance and to estimate end-user quality of service through the calculation of key performance indicators; these estimates are not specifically tailored to social networking services, which focus more on macro-level interactions among users. Social network providers typically offer their services over-the-top (best effort) of the network infrastructure and make limited guarantees on service quality. If enterprise network analytics were able to identify and present social-network value-added data, it would improve the enterprise's ability to deliver more compelling social network services for its employees, beyond presence alone. The enterprise would also have more insight into how it can leverage and manage social-network-related traffic.

We intend to accurately identify and classify social networking related data from communications networks and develop efficient semantic processing and uplifting techniques to store it in a semantic database. Semantic uplifting involves the processing and annotating of low level data with semantic information relevant to a particular application or context. This information can then be linked to existing and newly designed semantic web based ontologies, such as Friend of a Friend (FOAF), so that it can be leveraged in existing social networking services. The linking process must be supported with dedicated semantic annotation algorithms that can classify the types of data being collected. Within the enterprise, social network services can subscribe to this new source of information about social network activities, and the network operator can use the data to apply network policy control to better adapt its infrastructure to social network usages.

John W. Woods, Rensselaer Polytech
Statistical Multiplexing MD-FEC

We combine MD-FEC and statistical multiplexing (stat-mux) to work in the multiple-video environment. Due to the non-convexity of the MD-FEC error as a function of total bandwidth, we here consider the case of just two videos and develop a computational solution for a discretized version of the stat-mux bandwidth assignment problem. This is followed by a presentation of simulation results showing that there is a useful range of server bandwidth where the stat-mux and two MD-FECs work together (are synergistic), providing significant increases in performance versus optimal solutions without combined stat-mux and MD-FEC.

Luigi Iannone, Telecom ParisTech
The LISP Views Project

The Locator/ID Separation Protocol (LISP), proposed by Cisco and currently under standardization at the IETF (Internet Engineering Task Force), is an instantiation of the paradigm separating locators and identifiers. LISP improves the Internet's scalability while also providing additional benefits (e.g., support for multi-homing, traffic engineering, mobility, etc.) and having good incremental deployability properties. Indeed, several implementations of LISP already exist (both proprietary and open source), all gathered together in an international testbed (http://www.lisp4.net) scattered worldwide. Such a testbed is very important for developing and improving the protocol; however, it is not sufficient for a complete view and understanding of the behaviour of LISP. As in the BGP case, where projects like RouteViews provide information on the state, dynamics, and evolution of the network through both summarized and raw data, long-term passive and active measurements must be performed to master the LISP technology in all its aspects. The LISP Views project aims at providing the LISP community with the right set of monitoring tools, specifically designed for LISP, and raw data to support the long-term evolution of LISP.

Saverio Mascolo, Politecnico di Bari
Architecture for Robust and Efficient Control of Dynamic Adaptive Video Streaming over HTTP

According to a recent report published by Cisco, video traffic will account for more than 90% of global Internet traffic in 2014. Such tremendous growth is fed by video streaming applications such as YouTube, which delivers user-generated video content, and Netflix, which streams movies and already accounts for more than 20% of USA Internet traffic. Another trend feeding this growth is the ever-increasing number of smartphones and tablet devices accessing the Internet over 3G/4G wireless mobile connections [7]. Adaptive video streaming represents a key innovation with respect to classic progressive-download streaming à la YouTube, where the video is encoded at a constant quality or bit-rate and delivered as any other file over HTTP.

This proposal aims at designing a robust, efficient, and scalable control system for adaptive (live) video streaming over the best-effort Internet. The first part of the proposal will investigate, using a rigorous mathematical control approach, the advantage of designing a server-side adaptation method with respect to a client-side one such as those supported by Apple, Akamai, or Microsoft IIS. The mathematical model will take into account propagation delays, which play a critical role in such a control system, and it will be rigorous yet simple, so that it provides a tool for easy design and tuning of the controller. The second part of the proposal will implement the adaptive video streaming system, first on a single server located in our lab and then on an Amazon EC2 cloud or a "Cisco Multimedia Cloud", aiming at further deployment in production networks.
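To make the control problem concrete, here is a minimal rate-controller sketch (the bitrate ladder, smoothing factor, and buffer thresholds are assumed values, and this is not the controller the proposal will design): it picks the highest encoding rate that a smoothed bandwidth estimate and the client buffer level can sustain.

```python
# A minimal sketch of a rate controller of the kind the proposal analyzes:
# choose the highest ladder rate that an estimated bandwidth (smoothed) and
# the client buffer level can sustain. Thresholds are assumed values.

LADDER_KBPS = [400, 800, 1500, 3000, 6000]

class RateController:
    def __init__(self, alpha=0.2, safety=0.8, low_buffer_s=5.0):
        self.est_kbps = LADDER_KBPS[0]
        self.alpha, self.safety, self.low_buffer_s = alpha, safety, low_buffer_s

    def update(self, measured_kbps, buffer_s):
        # Exponentially weighted estimate of the sustainable throughput.
        self.est_kbps += self.alpha * (measured_kbps - self.est_kbps)
        budget = self.est_kbps * self.safety
        if buffer_s < self.low_buffer_s:      # protect against rebuffering
            budget *= 0.5
        choices = [r for r in LADDER_KBPS if r <= budget]
        return choices[-1] if choices else LADDER_KBPS[0]

ctrl = RateController()
for bw, buf in [(2000, 12), (2500, 14), (900, 4), (3000, 10)]:
    print(ctrl.update(bw, buf))               # selected encoding rate, kbit/s
```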

Yao Wang, Polytechnic Institute of New York University
Video adaptation for HTTP Live Streaming Using Perceptual Quality and Rate Models Considering the Impact of Temporal Variation of Frame Rate and Quantization

In video streaming, the video coding rate has to be changed dynamically in response to changes in the sustainable bandwidth at the receiver. To optimize the perceptual quality, one must consider the impact of varying the video rate (often realized by varying the quantization stepsize, frame rate, and frame size) on the perceived video quality. We will perform subjective quality experiments to systematically evaluate the impact of such variations, and develop models that analytically relate quality to the variation patterns of different encoding parameters. We will further design practical video adaptation algorithms under the recently standardized dynamic adaptive streaming over HTTP (DASH) framework that can improve users' quality of experience by exploiting the developed quality model. Although the long-term goal of this project is to consider the impact of varying all three encoding parameters, we will focus on the adaptation of quantization stepsize and frame rate during the requested funding period.

Kevin C. Almeroth, University of California, Santa Barbara
Packet Scheduling Using IP Embedded Transport Instrumentation

Explosive growth in multimedia streaming to the home access network has driven an increase in the complexity of multimedia streaming systems. Modern multimedia streaming systems contain buffers as well as control systems such as Adaptive Bit Rate (ABR) streaming to prevent re-buffering events. The system now in place is quite complex and increases the need for good management of resources at the home access router. Current management technologies such as Hierarchical Token Bucket (HTB) and Hierarchical Fair Service Curve (HFSC) use a statically assigned ceiling rate for the root of the scheduler. Because this parameter is static while the actual bandwidth provided by the ISP fluctuates, severe problems are encountered: if the upstream bandwidth fluctuates above the statically assigned ceiling rate, bandwidth is wasted; if it fluctuates below it, management becomes poor. In this research we will build an Active Packet Scheduling (APS) system. Our packet scheduler self-tunes to network conditions, preventing bandwidth waste and providing superior bandwidth management capabilities.
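As a sketch of the self-tuning idea (an assumed estimator, not the APS design), the scheduler's ceiling can be derived from recent achieved upstream throughput instead of a static value; applying the result to an actual qdisc such as HTB is left outside this snippet.

```python
# Sketch of the self-tuning idea (assumed estimator, not the proposed APS
# system): track recent achieved upstream throughput and derive a scheduler
# ceiling from it instead of using a static value.
from collections import deque

class CeilingEstimator:
    def __init__(self, window=30, headroom=0.95):
        self.samples = deque(maxlen=window)   # recent throughput samples, kbit/s
        self.headroom = headroom

    def add_sample(self, achieved_kbps):
        self.samples.append(achieved_kbps)

    def ceiling_kbps(self):
        if not self.samples:
            return None
        # Use a high percentile of recent samples so one bad second does not
        # collapse the ceiling, while sustained drops still lower it.
        ordered = sorted(self.samples)
        p90 = ordered[int(0.9 * (len(ordered) - 1))]
        return int(p90 * self.headroom)

est = CeilingEstimator()
for s in [9500, 9800, 9200, 6100, 9700, 9600]:    # fluctuating ISP uplink
    est.add_sample(s)
print("suggested ceiling:", est.ceiling_kbps(), "kbit/s")
```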

Jerry D. Gibson, University of California, Santa Barbara
Cloud-Based HDR Video Post-Processing for Mobile and Asymmetrical Telepresence

Cisco Telepresence has set the standard for high quality video communications. With increasing resolution and screen size, and the inclusion of front-facing cameras on today's smartphones, it is evident that two-way communications between a Telepresence room and a mobile smartphone will become more important. Since smartphones are often used in uncontrolled environments, a particular challenge is capturing high quality video in poor lighting conditions, where a user's face may be obscured by overexposed and underexposed pixels. A solution to this problem would be to incorporate high dynamic range (HDR) video cameras into the smartphone. However, the need to keep smartphone costs low will preclude this solution for the near future. Smartphones already have the capability to capture HDR still photographs by combining multiple exposures [1,2]. This method works well for static scenes, but is not directly applicable to video, since pixels must correspond exactly between the different exposures and motion leads to ghosting artifacts. However, our recent research has addressed the substantial challenges involved using a combination of the following:

  • Dual-exposure algorithm to capture video with adaptively alternating short and long exposures (including a face-aware option).
  • Exposure matching algorithm to match the brightnesses in differently exposed frames.
  • Hierarchical block-based motion estimation to register frames with pixel saturation.
  • Methods to determine poorly registered pixels and blocks.
  • A bi-directional motion estimation technique to improve registration quality.
  • Pixel-wise motion refinement to correct poorly registered pixels.
  • Spatially adaptive filtering to smooth radiances using reliable edges and underlying motion vectors for registration artifact removal.

The end result [3] is high quality HDR video using inexpensive cameras found on mobile devices today. Currently, the applicability of our method is limited by the lack of a real-time implementation and by the computational and battery power limitations of smartphones. In the proposed research, we move the required signal processing to the cloud and examine the tradeoffs involved in creating a real-time implementation. By minimally impacting smartphone requirements and smartphone design, HDR video capture can be achieved on all handheld devices, improving the video quality delivered from mobile devices to the Telepresence site.

[1] http://www.apple.com/iphone/built-in-apps/

[2] http://www.samsung.com/global/galaxys3/specifications.html

[3] For video results please visit http://vivonets.ece.ucsb.edu/HDR.html
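For illustration, one step in the pipeline above, exposure matching, can be sketched as a simple linear-domain gain by the exposure ratio (real camera response curves, registration, and saturation handling are simplified away; the frame and ratio below are synthetic):

```python
# A rough sketch of exposure matching: bring a short-exposure frame to the
# brightness of a long-exposure frame using the known exposure ratio, so the
# two can be registered. Saturated pixels simply remain clipped.
import numpy as np

def match_exposure(short_frame, exposure_ratio):
    """short_frame: float image in [0, 1]; exposure_ratio = t_long / t_short."""
    matched = short_frame * exposure_ratio        # linear-domain gain
    return np.clip(matched, 0.0, 1.0)             # saturated pixels stay clipped

rng = np.random.default_rng(0)
short = rng.uniform(0.0, 0.25, size=(4, 4))       # synthetic dark frame
print(match_exposure(short, exposure_ratio=4.0))  # brightness comparable to long
```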

Ken Goldberg, University of California at Berkeley
Open Source Architecture for Cloud Robotics

I propose a new research project at UC Berkeley to define an open-source architecture and APIs for Cloud Robotics that would allow data, code, and computation in the cloud to be shared across multiple robot platforms.

Robotics is at an inflection point; over 5M floor-vacuuming robots, 10K military robots, and 2K surgical robots have been adopted in the last decade. The Google self-driving car indexes a vast library of maps and images; it exemplifies a new paradigm of "Cloud Robotics". The Cloud is a rapidly expanding resource for massively parallel computation and real-time sharing of vast data resources. Cloud Robotics has the potential to significantly improve robots in five ways: 1) offering a global library of images, maps, and object data, often with geometry and mechanical properties, 2) massively parallel computation on demand for sample-based statistical modeling and motion planning, 3) robot sharing of outcomes, trajectories, and dynamic control policies, 4) human sharing of "open-source" code, data, and designs for programming, experimentation, and hardware construction, and 5) on-demand human guidance ("call centers") when all else fails. In this one-year project, we will investigate items 1-4 in the context of grasping and manipulation on two Cloud-based robot systems. This work will extend results described in our paper: "Cloud-Based Robot Grasping with the Google Object Recognition Engine," Ben Kehoe, Akihiro Matsukawa, Sal Candido, James Kuffner (Google), and Ken Goldberg, submitted to the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, May 2013.

Edmund Yeh, Northeastern University
Joint Forwarding and Caching Algorithms for Named Data Networking

Named Data Networking (NDN) is a new content-centric architecture which focuses on effectively mediating between consumers and producers of content, rather than merely focusing on establishing connections between sources and destinations. The effectiveness of the NDN architecture depends in large part on a high-performing and scalable strategy layer involving forwarding as well as cache management algorithms. Recently, the PI's group has developed joint forwarding and caching algorithms for the NDN architecture which have demonstrated promise in practical network settings. This proposal focuses on (1) developing refinements to the algorithms which minimize the average delay in fetching data packets; (2) developing techniques for reducing the amount of state maintained by the algorithm; (3) investigating generalizations of the algorithm where multicast can be used to disseminate the interest packets; (4) extending the current algorithms to incorporate congestion control, subject to appropriate notions of fairness; (5) investigating the extension of the current algorithms to wireless, mobile, and delay tolerant settings; (6) adapting the algorithm to the more realistic case where the number of data objects is changing with time, and interest packets are subject to timeouts and retransmissions; (7) developing a theoretical justification for the optimality of the proposed algorithm; and (8) macroscopic performance scaling analysis for the NDN architecture.

Pattie Maes, MIT Media Lab
TeleStudio: A Video Platform for Creative Collaboration at a Distance

Our families, colleagues, and networks of friends are becoming more geographically separated. Currently available video conferencing software provides a live window between remote spaces but does not allow participants to interact in the same digital space. Collaborative activities which typically require visual cues in the same physical space, such as presenting a lecture together, teaching each other a skill, playing, broadcasting, animating, acting, and dancing with another person are difficult and fragmented in most videoconferencing systems. The ability for participants to coordinate their actions is diminished in these environments because the viewer's focus shifts between their physical environment, the other person's environment, and relevant digital media. In my research in the Fluid Interfaces Group at the MIT Media Lab, I have developed a teleconferencing system that merges the foreground of each remote space with layers of dynamic media. Users can interact with each other and digital content via movements, objects, and gestures. They see themselves and the remote partner composited with the media content in one and the same window. We propose to take this technology out of the lab and into the living room with funding to support the development of a network protocol, segmentation techniques, and NAT traversal.

Jun Fan, Missouri University of Science and Technology
Accurate TSV Measurements in 3D ICs

Measurements are necessary to validate TSV modeling and characterize TSVs' electrical performance in high-speed 3D IC applications. However, due to their ultra-small geometry, the effects of test fixtures may introduce relatively significant errors that can be even more pronounced than the actual electrical performance of the TSVs. This proposal targets improving de-embedding techniques for TSV measurements up to 50 GHz. In particular, we will investigate improvements to test fixture designs and de-embedding algorithms for measuring the electrical performance of a pair of TSVs, a canonical structure that forms the basis of much TSV modeling and characterization.

Further, to understand the electrical performance and gain design guidelines in 3D ICs, in-situ measurements may be preferred. This proposal also includes building on-chip test structures using TSVs to measure the current of a TSV under study, which is very useful for understanding and characterizing high-speed signal and power distribution network performance in 3D ICs.

Jun Fan, Missouri University of Science and Technology
Year 2: Measurement-Calibrated Modeling and Early-Stage Design Exploration for 3D ICs

This proposal is for Year 2 of the funded project.

Compared to single-chip integration, 3D integration and 2.5D integration using through-silicon vias (TSVs) and silicon interposers offer shorter interconnects, smaller footprint, and heterogeneous integration at lower cost. Accurate modeling of TSVs and interposers is critical for signal/power/thermal integrity in 3D or 2.5D integration. The proposed work will develop measurement-calibrated, compact equivalent circuit models for TSVs and interposers, especially for TSV arrays and for TSVs used for high-speed signaling, together with accurate exploration of (1) die-to-die (D2D) integration and signaling schemes at the early design stage and (2) physical design rules prior to any specific physical design.

Chris H. Kim, University of Minnesota
Electromigration Effects in Sub-32nm High Frequency Signal Wires: Characterization Techniques and Mitigation Guidelines

Electromigration (EM) is the primary back-end-of-line reliability concern, in which metal atoms gradually migrate under the "electron wind force" at elevated temperatures, creating voids (opens) or extrusions (shorts). The scaling of wire dimensions and the commensurate increase in current density and die temperature are making EM a greater concern in sub-32nm technologies. Furthermore, EM in gigahertz-frequency signal wires is expected to have a greater impact on circuit performance, which could lead to overly pessimistic design restrictions in future technologies. Unfortunately, high-frequency EM effects in signal wires have not been thoroughly studied by the research community. The goal of this work is to demonstrate, for the first time, practical test structures and on-chip circuits for characterizing the impact of high-frequency EM on circuit performance. Based on hardware data from a dedicated test chip, we will provide specific design guidelines and mitigation techniques for EM prevention in sub-32nm technologies.

Krishnendu Chakrabarty, Duke University
Advanced Testing and Design-for-Testability Solutions for 3D Stacked Integrated Circuits

Three-dimensional stacked integrated circuits (3D SICs) based on through silicon vias (TSVs) promise to overcome barriers in interconnect scaling, thereby offering an opportunity to achieve higher performance using CMOS. Since testing remains a major obstacle that hinders the adoption of 3D integration, test challenges for 3D SICs must be addressed before high-volume production of 3D SICs can be practical. Breakthroughs in test technology will allow higher levels of silicon integration, fewer defect escapes, and commercial exploitation.

This three-year research project is among the first to study 3D SIC testing in a systematic and holistic manner. The research scope includes three topics: test content and test access, test cost modeling, and built-in self-test (BIST). In Year 1, we developed innovative solutions for testing of TSVs, test generation that accounts for TSV stress, test cost modeling for selecting test flows, and built-in self-test. This report summarizes the progress made in Year 1 and identifies the research plan for Year 2.

Hyungcheol Shin, Seoul National University, School of Computer Science and Engineering
Impact of Random Telegraph Noise (RTN) in 10 nm-scale Bulk FinFETs on the Stability of SRAM Cells

As devices scale down, the impact of Random Telegraph Noise (RTN) on performance and reliability becomes much larger in devices and logic circuits. In particular, the impact of RTN on SRAM cells causes serious problems such as bit-cell failure and noise-margin reduction. We will therefore analyze the impact of RTN on the operating characteristics and reliability of bulk FinFET devices using 3D TCAD simulation, and we will compare the impact of RTN across various bulk FinFET structures. Based on these results, we intend to analyze the comprehensive degradation mechanism, leading to predictions of the reliability degradation of bulk FinFETs at future technology nodes.

Based on the above studies, the impact of RTN on the stability of Static Random Access Memory (SRAM) cells consisting of six bulk FinFETs will be analyzed and compared with SRAM cells in conventional MOSFET or SOI FinFET technology. Moreover, the impact of RTN combined with process variability will be analyzed using atomistic simulation. Finally, we can apply the results to the development of a fabrication process and an optimal structure that minimize the reliability problems.

Alan Bovik, University of Texas
Perceptual Optimization of Wireless Video Networks

The next generation of wireless networks will become the dominant means for video content delivery, leveraging rapidly expanding cellular and local area network infrastructure. Unfortunately, the application-agnostic paradigm of current data networks is not well-suited to meet projected growth in video traffic volume, nor is it capable of leveraging unique characteristics of real-time and stored video to make more efficient use of existing capacity. As a consequence, wireless networks are not able to leverage the spatio-temporal bursty nature of video. Further, they are not able to make rate-reliability-delay tradeoffs nor are they able to adapt based on the ultimate receiver of video information: the human eye-brain system. We believe that video networks at every time-scale and layer should operate and adapt with respect to perceptual distortions in the video stream.

In our opinion the key research questions are:

  • What are the right models for perceptual visual quality for 2-D and 3-D video?
  • How can video quality awareness across a network be used to improve network efficiency over short, medium, and long time-scales?
  • How can networks deliver adaptively high perceptual visual quality?

Our new ideas. Our proposed research vectors are summarized in the Table on the next page. Our first vector, Video Quality Metrics for Adaptation (VQA), focuses on advances in video quality assessment and corresponding rate-distortion models that can be used for adaptation. Video quality assessment for 2-D is only now being understood, while 3-D is still largely a topic of research. Video quality metrics are needed to define operational rate-distortion curves that can be implemented to make scheduling and adaptation decisions in the network. Our second vector, Spatio-Temporal Interference Management (STIM) in Heterogeneous Networks, exploits the characteristics of both real-time and streamed video to achieve efficient delivery in cellular systems. In one case, we will use the persistence of video streams to devise low-overhead interference management for heterogeneous networks. In another case, we will exploit the buffering of streamed video at the receiver to opportunistically schedule transmissions when conditions are favorable, e.g., when mobile users are close to the access infrastructure. Our third vector, Learning for Video Network Adaptation (LV), proposes a data-driven learning framework to adapt wireless network operation. We will use learning to develop link adaptation algorithms for perceptually efficient video transmission. Learning agents collect perceptual video quality distortion statistics and use this information to configure cross-layer transport and interference management algorithms. At the application layer, we propose using learning to predict user demand, which can be exploited to predictively cache video and smooth traffic over the wireless link. There are many synergies between the VQA, STIM, and LV vectors, which will be explored in the Synergistic Tasks (SYN) vector. For example, we will prototype several algorithms and will use system-level simulations to estimate capacity gains at large scale.

Michael Honig, Northwestern University
Graph-based Modeling and Algorithms for Interference Mitigation

This proposal addresses the problem of mitigating interference at a receiver from potentially unknown interferers, given limited spatial dimensions. In that scenario, the receiver must learn and exploit statistical properties of the interference. This becomes challenging when the set of interferers is time-varying, the channels are unknown and possibly non-stationary, and the receiver has limited degrees of freedom (across time, frequency, and space) with which to distinguish the desired user's signal from the interference. Graph-based message passing provides a powerful framework for addressing statistical inference problems and is well matched to the mitigation of unknown interference. Building on our prior related work on narrowband interference mitigation, we plan to develop and evaluate graph-based algorithms that account for any a priori or side information about the received signals and channel states, while adapting to the statistical properties of the interference.

S. J. Ben Yoo, University of California, Davis
Elastic Optical Networking for High Capacity Data Centric Networks

We propose to investigate a new adaptive and resilient control plane and to experimentally demonstrate elastic optical networking (EON) by setting up an EON testbed integrated with the new control plane and dynamic EON transmitter/receiver technologies based on optical arbitrary waveform generation and measurement (OAWG and OAWM). The proposed EON exploits a new class of hardware and software systems with the following new capabilities:

  • Adaptive and flexibly reconfigurable EON transceivers capable of modulation format switching and spectral grid tuning
  • Hitless defragmentation
  • EON testbed integration with the new control plane and the new OAWG/OAWM transceivers to demonstrate the following five experiments: normal connection setup, signal regeneration, signal failure detection and restoration, recovery from link failures, and hitless defragmentation
  • Adaptive optimization of elastic bandwidth allocations and network configurations based on real-time optical performance monitoring (OPM) enabled by a supervisory channel embedded in the EON

We propose to carry out theoretical and experimental studies of the above aspects of this innovative elastic optical networking approach with Cisco support.

David Pommerenke, Missouri University of Science & Tech
Feasibility of > 20 GHz Differential Channel Emulator

Channels can be characterized using measurements, full-wave simulations, or SPICE-based simulations, resulting in an S-parameter description. However, it is not always possible to predict the behavior of a link using simulation, as models for transmitters and receivers are often incomplete or inaccurate, and adaptive processes require very long simulation times. In such cases, a hardware emulation of the channel can help determine the behavior of the channel. Rising data rates require a re-evaluation of existing technologies, as all commercially available hardware channel emulators have limited performance (usually a small number of multi-tap FIR filters) at a very high price. The project will evaluate a wide range of existing technologies, implement prototypes using the newest set of integrated circuits, and design a prototype of a novel technology based on 3D printing of structures that emulate S-parameters by selectively disturbing channels built from very low-loss transmission lines. The outcome is an in-depth understanding of the technology involved and of the core hardware elements of a channel emulator supporting > 20 GHz bandwidth.

Andrew Lippman, MIT Media Laboratory
Ultimate Media

We propose re-considering the world of visual media interaction from the perspective of the average, digitally aware person but in an extreme environment:

  • Extreme connectivity, both fixed and mobile
  • Extreme processing
  • Extreme visual presence
  • Extreme scale and scope
  • Extreme social connectedness

We will build an open, extensible platform for visual media that incorporates our desire for entertainment and balances it with the intention to inform.

Bharat Bhuva, Vanderbilt University
Effects of X-ray inspection system on advanced technology transistors

The X-ray inspection system is used to ensure that the PCB manufacturing process meets the reliability standards set by the semiconductor industry. For ICs and other electronic components, the X-rays used during PCB inspection are absorbed by the semiconductor material, resulting in electron-hole pair generation throughout the transistor structure. For older technology nodes, the amount of charge generated and its effects on transistor parameters were not significant. For transistors at nanometer technology nodes, the X-ray inspection system causes shifts in individual transistor parameters significant enough to cause malfunction of individual ICs and entire PCBs. This proposal is for identification of the critical factors affecting transistor parameter shifts through experimentation as well as TCAD modeling. Once the physical mechanisms are understood, design/test techniques will be developed to mitigate the effects of X-rays.

Sang H. Baeg, Hanyang University
Retention Reliability Study Under One Row Hammer For DRAM Under 3x nm Technologies

Buried DRAM cells below the 3x nm node fail to retain their contents when repeated VPP stresses to one cell inject erroneous current into nearby cells. The objective of this proposal is to study the failure mechanisms at various levels of DRAM components: device, circuit, and component. The study consists of three main activities: 1) modeling and simulating charge injection and collection between aggressor and victim cells, 2) studying the triggering parameters and their contributions to the severity of the issue, and 3) correlating the triggering events to failures in real components.

Paul S. Ho, The University of Texas at Austin
Measurement and Modeling of Thermal Stresses and Electromigration for Cu Through-Silicon-Via in 3D Interconnects

Responding to the Cisco RFP on 3D IC Integration and Enabling Technologies, we propose the following study with two objectives. The first is to measure the thermal stress characteristics of Cu TSVs in a blind Cu/Si via test structure, then use the results and material properties to develop a thermomechanical model to evaluate the stress reliability of stacked via structures developed by Cisco. The second objective is to perform EM tests to measure the effects of stress on the EM lifetime and statistics of TSV microbumps. In the first task, we will combine the wafer curvature method with synchrotron X-ray microdiffraction to measure the stress/strain distribution and determine how the distribution is affected by grain growth during thermal cycling. Blind Cu/Si via test structures will be used for these measurements, and the results will provide the material properties and the stress and plasticity behavior of TSV structures. The results will be used to develop thermomechanical models to evaluate the effect of stress on the failure mechanisms of TSV interposer stacked structures, based on fracture mechanics to evaluate the stress-induced driving force for crack delamination and via extrusion in TSV structures [9]. Stress measurements and modeling will be used for reliability assessment of the TSV interposer stacked structures developed by Cisco. The results will provide the information required to optimize the TSV fabrication process and TSV design rules for the development of 3D interconnects with TSVs.

Joy Ying Zhang, Carnegie Mellon University
Privacy Preserved Personal Big Data Analytics through Fog Computing

In this project, we propose a novel approach that uses fog computing to distribute computation among mobile devices, a personal cloud, and the public cloud, and to isolate the raw sensory data within the personal cloud, where users have full control of their data. Users authorize analytic application widgets to run on their personal cloud and submit only user-approved statistics to behavior modeling applications.

This project will focus on the underlying science of privacy-preserved big data analytics and study the system requirements for developing real-world applications using fog computing.

Jacob A. Abraham, The University of Texas at Austin
Effective Test Generation and Fault Grading Techniques for Multi-Core Systems

Analyzing complex systems for fault grading, test generation, or power estimation demands EDA tools that can manage millions or even billions of gates. Unfortunately, most available EDA tools are able to deal only with individual components of complex designs (such as multi-core processors and systems-on-chip), not with the complete systems. Our research has developed techniques for generating high-quality application-level tests and rapid fault coverage estimates for conventional instruction-set processors by mapping component-level models to the system level. Analysis of the interconnection fabric of a large multi-core system can uncover mechanisms for accessing the different embedded modules from the boundaries of the system. The proposed research will develop application-level tests for multi-core designs and network processors by combining the tests for the embedded processors with the system-level access mechanisms. We plan to evaluate the techniques on multi-core designs as well as on actual chips from Cisco.

Monika Jain, Indian Institute of Technology, Bombay
CampuSense: Creating a sustainable campus through sensor networks and data analytics

The main idea of this project is to create a feedback loop of Sense, Analyze and Act - using sensing networks and data mining to develop policies and make design decisions in the areas of energy efficiency and air quality that will impact the urban fabric of the IIT Bombay campus. A data analytics platform will help create generic, adaptable models that can be implemented on other campuses and scaled up for application at a city scale for energy-efficient operations and management.

IIT Bombay, a 550-acre mini-city with around 15,000 people, provides a real-life experimental platform for various urban challenges faced today, specifically as they relate to energy efficiency in buildings, air quality indoors and outdoors, and efficient building operations over a building's lifetime. Understanding building performance over the life-cycle and measuring the various parameters that determine human comfort, health, and energy efficiency is of paramount importance for promoting sustainable urban development; real-time monitoring can provide fine-grained data for such analysis.

This study aims to establish a correlation between various campus activities and events and air quality at various points on the campus, which can also lead to evolving patterns of use that reduce pollution levels on the campus. As part of this project, we wish to understand this relation between air quality and other events and evolve a model of how they relate to each other in a constrained area such as a campus.

Jeffrey Burke, The Regents of the University of California
Using Named Data Networking as a Transport for Web Real-time Communication Videoconferencing Technology

WebRTC offers new opportunities for videoconferencing in browsers, including improved options for peer-to-peer communication. But, implementations must contend with limitations of the current Internet architecture related to efficiency and scalability for multi-party conferencing, and security and reliability in mobile and other important contexts. The proposed research will explore the use of Named Data Networking (NDN) as a transport for WebRTC videoconferencing in a major browser (Chrome or Firefox), and compare it with the current IP-based solution for the same platform. A prominent architecture in the "information-centric networking" research area, NDN routes directly based on data names, rather than host addresses. This enables content caching at any router in the network, and, when combined with other features of the architecture (such as per-packet content signatures), offers the potential for more secure, more efficient videoconferencing in the browser. The project leverages the team's recent design and implementation of video streaming over NDN, and extends it into the important emerging area of standardized web-based conferencing.

Gail-Joon Ahn, Arizona State University
Policy-aware Secure Collaboration in Fog Computing

In this project, the PI will investigate the following important and challenging issue to support a client-centric approach to secure sharing and collaboration among services offered by Fog Computing: when multiple Fog Computing platforms with different security approaches and mechanisms collaborate to provide various services, how do we address heterogeneity among their policies so that when we compose multiple services to enable a large-scale application service, policy anomalies are effectively monitored, minimizing security breaches?

Constantine Dovrolis, Georgia Institute of Technology, College of Computing
Modeling the evolution of network protocol stacks

Network protocol stacks typically follow a multi-layer hierarchical architecture. What determines the required number of layers, or the number of protocols at each layer? The space of applications, services and user expectations (at the top layer) and the space of elementary functions (at the bottom layer) are constantly evolving -- how does this dynamic environment affect the organization of layered protocol stacks? What determines the evolvability of a protocol in the presence of changes? And how can we design new protocols that can effectively coexist, or even replace, older incumbents that are widely deployed? We will investigate these fundamental questions using two models, EvoArch and Lexis, aiming to identify broad properties that can guide protocol design and deployment decisions.

Shervin Shirmohammadi, University of Ottawa
Error-Resilient Scalable Adaptive Live Video Streaming over Best Effort Networks

Best effort networks, used by over-the-top (OTT) video services, are characterized by two main problems, random packet loss and dynamically varying available bandwidth, which interrupt the flow and presentation of the video to the end user because buffering and retransmission are performed to recover the lost packets. While these interruptions are tolerated by users in video on demand (VoD) applications such as Netflix or YouTube, they are acceptable for neither live presentational video, such as news/sports/concert broadcasting, nor live conversational video, such as conferencing or telepresence, because by the time a lost packet is recovered, it has missed its playback deadline and is useless. As a result, today, live video transmission over OTT services is adversely affected by "jumps" and skipping of segments. In this project, we investigate adaptive live video streaming to heterogeneous receivers using OTT video services and propose: 1) a server-side and/or proxy-side adaptive transport method for live video streamed to heterogeneous receivers; and 2) an instrument to collect the necessary information from both the network and the video to measure metrics that determine how to best protect video against network loss.

Diana Marculescu, CMU
Efficient Many-Core Reliability Modeling, Analysis, and Optimization

During the last decade, the design and manufacturing of integrated systems have undergone dramatic innovations, at the price of decreased long-term or "lifetime" reliability. With continuous shrinking of device and interconnect sizes, the rate of transient failures is increasing. In addition, due to the increased transistor density without proportional downscaling of supply voltage, power density and operating temperature rise significantly, thereby further accelerating the failure mechanisms due to their exponential dependence on temperature. While the problem of reliability modeling and optimization has received a lot of attention in recent years, most efforts have been developed at device or circuit level, or perform system level lifetime analysis under assumptions that may not accurately reflect the silicon reality. This project addresses this missing link by developing a methodology for reliability modeling, analysis and optimization for many-core systems governed by a network-on-chip communication paradigm, subject to single or multiple event upsets. The proposed theoretical framework will allow for statistical analysis of soft error-related reliability for many-core systems in the presence of workload- and process-induced variations.

Indranath Dutta, Washington State University
A FUNDAMENTAL INVESTIGATION OF THE IMPACT OF CU VIA-SI INTERFACES ON THE STABILITY OF STACKED-DIE STRUCTURES

Interfacial sliding under a combination of stress and electric current, which can lead to intrusion or protrusion of vias within Si, can cause warpage of the thin dies in 3-D structures, and potentially cause fracture of solder micro-bumps. This has serious implications on package robustness/stability, the extent of the problem depending on the materials-set and process-protocols used. This proposal is aimed at studying these effects and assessing the role of materials and processes. The ultimate goal is to quantitatively predict the impact of electromigration-influenced interfacial sliding in TSV structures, and devise strategies to minimize its impact.

Prasad Calyam, University of Missouri-Columbia
Dual-mode GENI Rack for supporting Domain Science Use Cases on Campuses

University campuses are beginning to build and deploy Science DMZs to support use cases that involve, for example, data-intensive science needing access to remote instrumentation/data resources, GENI infrastructure related experimentation on software-defined networking, and access to cloud platforms for virtualization and storage. Although a variety of solutions are available from vendors and the open-source community, there is a need to integrate and federate Science DMZ resources (e.g., routers, switches, data transfer nodes, measurement points) in a manner that satisfies various domain-science and campus-enterprise needs. In this project, we aim to build upon the 'GENI Rack' concept and APIs to allow a "Dual-mode" operation that involves a "GENI On" mode for external/shared use of Science DMZ resources as a GENI community resource, and a "GENI Off" mode for exclusive use by campus researchers and their collaborators. Our proposed activities include: (a) enhancement of the GENI API software to handle campus aggregate resource reservation and renewal schemes, (b) development of novel OpenFlow controller applications that allow software-defined networking configurations to support domain-science use cases across wide-area networks between two remote campus Science DMZs, and (c) integration of a "programmable" perfSONAR feature set for software-defined measurement and monitoring of Science DMZs that allows for co-scheduling of resources and performance monitoring.

Gerald Friedland, International Computer Science Institute
Proposed ICSI Research Project: Event Detection for Improved Speaker Diarization and Meeting Analysis

We propose to conduct systematic research on unsupervised acoustic event detection methods that a) may improve the accuracy and robustness of speaker diarization and speech recognition systems for meetings and b) can be used to improve semantic indexing of meeting recordings.

Elio Salvadori, CREATE-NET
AGISCO: toward an AGIle control plane in Spectrum switChed Optical networks based on hybrid centralized/distributed architectural solutions

The introduction of advanced optical transmission technologies such as CO-OFDM is fostering the migration from the legacy fixed-grid to the innovative flexi-grid allocation of optical spectrum. As a consequence, the control plane needs to be enhanced in order to handle coherent transmission techniques and to manage the spectrum in an agile manner, thus providing full support for so-called Spectrum Switched Optical Networks (SSONs).

The currently deployed Cisco Control Plane (CP) solution is based on a highly distributed architecture. The idea behind the AGISCO project is to investigate how CP solutions based on centralized functionalities could help manage SSONs more effectively. Centralized solutions have gained increasing research interest in recent years thanks to the concepts of the Path Computation Element (PCE) and Software Defined Networking (SDN). The latter was initially not meant to be applied to optical networking, but the possibility of deploying a controller that manages the entire network makes it an appealing solution for this context too. Since SDN in transport and optical networks is a very recent concept, it is far from being standardized. In contrast, the PCE-based centralized solution has been the object of a standardization process within the IETF, and its implementation for flexible optical networks is being considered by many vendors. Indeed, some service providers see a CP architecture based on an openly programmable PCE as an interesting solution, since it could empower them to run their own algorithms for computing optical channel routes in the network.

Unlike purely distributed CP solutions, architectures based on some centralized functionalities are not limited by the amount of information locally available and could potentially free the CP from the complexity of the coordination process among all nodes. Thus, they can enable network optimization, for example, in terms of spectrum utilization, latency, and energy efficiency. The path from a highly distributed architecture toward a solution based on centralized entities promises to decrease the complexity of SSON management and to increase flexibility; on the other hand, the technical maturity and standardization of some recent architectures (e.g., SDN) are still at an early stage. Starting from the known limits of distributed approaches, a deep study of innovative hybrid CP architectures can lead to a comprehensive understanding of the functional aspects necessary to develop and test advanced spectrum management mechanisms.

Leonard J. Cimini, University of Delaware
Interference Management for Multiuser MIMO Wireless Networks in Uncertain and Heterogeneous Environments

The proposed work is a continuation of a successful project in understanding fundamental issues in MIMO beamforming (BF) for IEEE 802.11 networks. The enhanced capabilities promised by multiuser networks come with many challenges, in particular, the task of effectively managing system resources. Fundamental questions include: How to most effectively use the available DoF? Which users to serve and with how many streams, taking into account channel conditions as well as fairness and aggregate throughput? In the current and previous years, we have answered these questions, devising new beamforming algorithms (e.g., MS-IA) and scheduling approaches (e.g., Strongest-Gain Selection) in the process. For the coming year, we propose to continue our research on MU-MIMO BF. Important tasks that we would like to tackle include: Completing our research by developing guidelines for when to use MU-MIMO versus SU-MIMO, especially with respect to uncertainty in CSI; developing robust beamformers that work well over a range of uncertainty in the CSI measurements; devising scheduling and resource allocation algorithms to maximize the capacity in the presence of receiver and network heterogeneity; and examining the issues related to channel estimation, robustness, and low-complexity processing for BF when very many antennas are available.

Patrick Eugster, Purdue University
A Fog Architecture

By striving for a seamless landscape between end hosts and cloud data centers, the fog paradigm has the potential to more efficiently deliver existing cloud-based services and enable new, presently infeasible services. However, to unleash this potential, research is required on various fronts. The specific aim of this project is to explore an architecture for delivering fog services, which includes several research components, namely (a) introducing and analyzing the new management, economic, and behavioral models created by fog computing, (b) leveraging and extending software-defined networking for managing fog devices and resources, (c) proposing new models for resource management that can reuse fog/network/cloud infrastructure in the best way according to objective functions, and (d) exploiting recent advances in partial homomorphic encryption to ensure privacy in decentralized computation at low overhead. The investigator team has expertise which covers all relevant topics and has ongoing projects related to several research sub-components, such as efficient geo-distributed big data processing, secure distributed data processing, and decentralized complex event and stream processing.

Sue Moon, KAIST
Using Delay-based TCP for Data Centers

With the rapid growth of data centers, minimizing the queueing delay at network switches has been one of the key challenges. In this work, we analyze the shortcomings of the current TCP algorithm when used in data center networks, and we propose to use latency-based congestion detection and rate-based transfer to achieve ultra-low queueing delay in data centers.
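For illustration only, the minimal sketch below (our own toy example, not the algorithm to be developed in this project) shows the basic idea of latency-based congestion detection in the style of TCP Vegas: the sender estimates how many packets are queued in the network from the gap between the minimum and the most recent RTT, and backs off before losses occur. The ALPHA/BETA thresholds and the window arithmetic are illustrative assumptions.

```python
# Illustrative sketch of delay-based congestion control (Vegas-style).
ALPHA = 2.0   # estimated queued packets considered "low"
BETA = 4.0    # estimated queued packets considered "high"

class DelayBasedSender:
    def __init__(self, cwnd=10.0):
        self.cwnd = cwnd        # congestion window, in packets
        self.base_rtt = None    # minimum RTT seen so far (propagation-delay estimate)

    def on_rtt_sample(self, rtt):
        """Adjust cwnd once per RTT based on the latest RTT sample (seconds)."""
        if self.base_rtt is None or rtt < self.base_rtt:
            self.base_rtt = rtt
        expected = self.cwnd / self.base_rtt          # rate achievable with empty queues
        actual = self.cwnd / rtt                      # rate actually achieved
        queued = (expected - actual) * self.base_rtt  # packets estimated to sit in queues
        if queued < ALPHA:
            self.cwnd += 1.0    # queues are short: probe for more bandwidth
        elif queued > BETA:
            self.cwnd -= 1.0    # queues are building: back off before loss occurs
        self.cwnd = max(self.cwnd, 2.0)
        return self.cwnd
```

In a data center setting, the same structure would be driven by microsecond-scale RTT measurements and paired with rate-based pacing rather than pure window adjustment.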

Peter Svensson, Norwegian University of Science and Technology
Perceived multimodal quality in telecollaboration

Telepresence and video conferencing systems have become important communication and collaboration tools for professionals located far apart. The office of the future will bring more challenging acoustics to telepresence, for instance open plan offices with meeting areas and the use of personal video in such spaces. Especially for telecollaboration sessions of longer duration, listener effort and the resulting fatigue are suspected to be factors that greatly impact perceived quality and user satisfaction. However, how exactly fatigue is caused in such situations has not been examined thoroughly. In this research proposal we concentrate on the effect of audio quality on listener effort and general user fatigue in multimodal telecollaboration situations. We assess the impact of different types of audio quality impairments on perceived quality and listening effort, by means of novel subjective assessment methods to be developed in this project work. The benefits of the research work proposed here are manifold; most importantly, it will help to improve the performance and acceptance of teleconferencing and telecollaboration systems in general, based on more thorough knowledge about the effect of different types of audio quality impairments on listening effort and overall fatigue. The advanced subjective assessment methodologies developed here will be useful beyond the scope of this project.

Lynford Goddard, University of Illinois at Urbana-Champaign
Year 3 Renewal of The CISCO UCS for Engineering Design: Applications in Electromagnetic and Photonic Simulations

The PI and his team request continued support for the third year of a three year project to apply the unique architecture of CISCO's Unified Computing System to demonstrate that this computing medium is effective for advanced parallel Finite Element Method (FEM) algorithms. We have successfully implemented our Finite Element Tearing and Interconnecting (FETI) software package using Open Message Passing Interface (OpenMPI) on the CISCO Arcetri cluster. We modified the FETI algorithm to handle resonant structures and added a perfectly matched layer (PML) implementation. We presented our method for spectrally resolving closely spaced resonant modes of a high-Q microring resonator based on the modal expansion method at the IEEE Photonics Conference and have a paper under review which includes our very small-scale (r=0.5 micron) 3D simulation results using a single node of the CISCO Erehwon cluster. We will discuss our work on nonconformal FETI and nested FETI/FETI-DP (dual primal) using the Erehwon cluster during two presentations at the IEEE Antennas and Propagation Symposium and also have a paper under review on this work. After successfully implementing FETI on multiple nodes of Arcetri, we performed medium-scale (r=10 micron) 3D simulations. We measured resource usage (computational, memory, and communication) and how it scales with the size of the FEM problem. Finally, we performed a full-scale (r=30 micron) 3D simulation on 271 processors across 19 Pacini nodes of the Arcetri cluster. The computation required 1.25 TB of memory and took just over 1.5 hours to run at a single frequency. In year 3, we plan to develop an analytical scaling model for resource usage and validate the model with benchmark testing, implement the Charm++ software redesign, optimize the code for scalability and speed, and simulate more complex problems such as larger photonic structures and red blood cell flow in capillaries.

Saleem Bhatti, University of St Andrews / Computer Science
IP Scalability Through Naming III (IP-STN III) – Distributed Site Border Router

An approach in the Identifier / Locator (I/L) solution space for future networking is the Identifier-Locator Network Protocol (ILNP). This proposal is to build upon the successful output of two current URP projects to show ILNP's capability as a dynamic, scalable, distributed router. A prototype implementation of the core functionality of an end-host ILNPv6 implementation has already been demonstrated. A version of that code-base is currently being modified to demonstrate an ILNP-capable site border router (SBR). This proposal will extend that work to show a distributed SBR. Use of ILNP as an SBR would allow multi-homing to be controlled from the end-site and to be implemented without any additional routing table entries in the DFZ. A distributed SBR would further allow scalable routing and connectivity from an end-site, for example as a resilient router system, or as part of an integrated data-centre/cloud connectivity management system.

Roch Glitho, Concordia University
Towards Cloud Architectures for Cost Efficient Applications and Services Provisioning in Wireless Sensor Networks

More and more IP wireless sensor networks (IP WSNs) are being deployed. Unfortunately, provisioning (i.e., developing, deploying and managing) applications and services in them remains cost inefficient. Applications and services are still embedded in these networks, leading to cost inefficiency through the proliferation of redundant networks. Provisioning platforms also do not enable cost efficiency because they do not offer application programming interfaces (APIs) suitable to users with different levels of technical skills, and they do not enable easy re-use of third party components. This proposal aims at the design, prototyping, and performance evaluation of cloud architectures for cost efficient applications and services provisioning in IP WSNs. Research on cloud architectures for service provisioning has so far focused on conventional/traditional networks, and the results are of little relevance to challenged networks such as IP WSNs due to the peculiarities of the latter (e.g., resource constraints). On the one hand, we will propose virtualization architectures to enable cost efficient provisioning through dynamic allocation and sharing of IP WSN resources. On the other hand, we will introduce services and applications provisioning platforms that will enable third party component re-use and raise the level of abstraction of the virtual IP WSN resources in order to cater to the needs of different categories of users. Prototype demonstrations will showcase this unique framework, highly ranked conference and journal papers will present the results, and source code contributions will be made to relevant open source initiatives.

Gail-Joon Ahn, Arizona State University
WebChameleos: Towards Secure and Trustworthy Web Browsing over HTTP 2.0

In this project, we focus on the HTTP 2.0 SPDY protocol, which brings a few core concepts of web delivery that HTTP was sorely lacking. The objective of this project is to examine and assess the specification of SPDY while comparing and analyzing the current implementations of SPDY. Our goal is to build a SPDY-enabled testbed and security assessment toolkits. We call our testbed WebChameleos; it is capable of performing security assessment on various open-source SPDY implementations. The systematic toolkits for WebChameleos will be designed and developed. The vision of future web-based systems is a complex and highly sophisticated one, requiring ongoing research and analysis to keep pace with the changing role and face of digital information creation and usage in cyberspace.

Kang L. Wang, UCLA
Spin Transfer Torque and Magneto-electric MeRAM for High-Performance Ultralow-Power Nonvolatile Memory Systems

This project aims to develop ultralow-power, high-speed spintronic memory which can be integrated with CMOS in terms of both operation and in foundry fabrication. This type of memory may be used standalone or in embedded applications, replacing volatile SRAM used today for high performance systems. A physics-based compact model of STT and MeRAM memory devices will be developed and it will be calibrated and tested using measurement data from real devices fabricated in-house. The compact model will be used in circuit level simulations to explore the possibility of using FPGA as a demonstrator to examine the benefits of incorporating this type of memory as compared with conventional FPGA. We will examine various levels of design-in benefits, including the programmable logic and memory using STT and MeRAM.

Kuo-Shen Chen, National Cheng-Kung University
Stress / Deformation Modeling and Analysis for ITO Membrane in Film Type Touch Panels

This project aims to develop a model based on rectangular plate theory for addressing the stress/deformation behavior of touch panels and to develop engineering solutions for overcoming the observed functional failures in several Cisco touch screen products due to premature touching caused by large deformation.

Meanwhile, essential mechanical properties will also be characterized in conjunction with the plate model, and the finite element method will be employed for numerical analysis, making it possible to identify the key manufacturing and assembly factors that dominate the failure behavior.

Currently, the major control issues to be considered are: residual stress/strain in the touch pads, deformation of the rubber gaskets, the effects of gluing and back screwing, assembly errors, and the applied load and temperature. It is expected that, through the effort of this project, it will be possible to solve the problems faced in current Cisco products and to provide key insights for optimizing the processing route for future related products.

Juan Li, North Dakota State University
Efficient Knowledge Management and Service Request Resolution for Service Centers

Enterprise service centers receive a large number of Service Requests (SRs) from customers and partners. Accurate and timely delivery of pertinent information to assist SR resolution is critical for providing the highest levels of service. Although enterprises normally store large volumes of service-related data, very few organizations are able to effectively utilize this "big data" to resolve SRs. In this project, we propose to use machine learning, data mining, and semantic web technologies to gather, structure, and deliver knowledge at critical points for service request resolution. We will develop effective knowledge management solutions to transform the mountain of data into usable knowledge or Intellectual Capital (IC) that can be utilized by service management solutions.

Joachim Taiber, Clemson University
Wireless power connectivity between the smart grid and electric vehicles

The communication requirements for successful implementation of WPT involve not only communicating with the roadside charger but also implementing a global communication infrastructure. Charging of EVs/HEVs can happen in two modes: 1) using a conductive charger, or 2) using wireless charging; either scenario translates into a certain amount of interactive capacity and interactive power requirements. This proposal is focused on wireless power transfer.

There exist two problems in the wireless charging process that need to be addressed: 1) How to deliver the amount of power required by the charging station without affecting the grid performance, and at the same time introducing the renewable energy sources characteristics, and 2) how to monitor and handle the overall charging process.

For the first problem stated, the capacity and power requirements could be manipulated to improve the grid characteristics: the EV/HEV could act as a spinning reserve when parked and charging. For example, in case a generator fails or the power demand is too high, EVs/HEVs will behave as sources and power will be withdrawn from their ESS to compensate the grid. Conversely, EVs/HEVs can function as a variable load to absorb fluctuations in the output power of renewable energy sources. For this approach, a communication infrastructure is needed that gathers and acts on information about the location and state of the vehicles connected to the grid.

For the second problem, the charging system relies on wireless communication between the vehicle and the charging station to monitor the process. But it also needs to communicate with wide area networks to provide services (billing, localization of charging spots, route planning, etc.). Moreover, for the dynamic wireless charging case, where the vehicles are being charged while in motion, communicating with the other vehicles in the charging lane is as important as communicating with the infrastructure. Since decisions need to be made quickly while access points change, distributed control strategies are needed.

Ashish Goel, Board of Trustees of Leland Stanford Junior University
SOAL at Stanford: The SOcial Algorithms Lab

Online networked social interactions between groups and individuals are having a transformative impact on society; Cisco's own "Internet of Everything" vision is a testament to these changes. This proposal is for the creation of SOAL, the SOcial Algorithms Lab at Stanford. Our lab is inspired by the following complementarity between human and computational power in shaping online social interactions. On one hand, computational power serves as a wide conduit to engineer richer and novel networked social interactions, allowing us to overcome traditional frictions imposed by a lack of coordination on time, location, or both. On the other hand, networked social interactions allow us to provide richer computational solutions: the classic example is collaborative filtering, where the individual decisions of many serve to improve the experience of a single individual---though many other such examples abound.

Our lab aims to focus on the interaction between networked social systems and algorithms. Our goal is to design the right primitives for online networked social systems: enabling effective communication; aligning incentives; and ultimately, leveraging the amount of information being generated to engineer better interactions. Our vision is that human and computational power can amplify each other in service of these goals. Our team combines core strengths in graph algorithms, probability, optimization, microeconomics, and market design with applied work in a number of relevant areas, from social search to online education and online work.

Yonggang Wen, Nanyang Technological University
Cloud-Assisted Task Offloading for Collaborative Applications in Fog Computing

The tussle between resource-hungry applications and resource-poor mobile devices has stood out as a critical issue for mobile platforms. In this research, we propose to leverage cloud computing technologies to extend the capabilities of mobile devices for resource-hungry applications, in the context of fog computing. Specifically, each mobile device is associated with a system-level clone in a proximate cloud. For example, the cloud infrastructure could be a nearby access point or a data centre in an enterprise. The mobile clone runs on a virtual machine (VM) and it can migrate to a location that best serves its associated physical device. Moreover, the mobile clone harnesses extended resources in its VM environment; and if needed, it can delegate complicated tasks to a remote cloud infrastructure that supplies almost unlimited resources. Under this framework, the research thrust is decoupled into three sub-tasks: application execution policy, platform-independent middleware, and security. Finally, the result will be applied to a collaborative application, Cloud Social TV, to verify its effectiveness.

Ramesh Govindan, University of Southern California
Automated Interoperability Testing for Next Generation Web Protocols

The robustness of the Internet and its applications is in large measure dependent upon the behavior of protocol implementations. As we move towards next-generation Web technologies, this dependence has become even more critical: these technologies support commerce and social media over the Internet, and the correctness and performance of protocols designed for the next-generation Web critically impacts many aspects of daily human lives.

An important aspect of robustness for protocols is protocol interoperability. Informally, interoperability refers to the correctness property that an implementation should be able to properly parse and interpret messages sent by another implementation. Given its importance, protocol developers today spend significant effort in testing the interoperability of their implementations. Such testing is highly valuable, but, being manual, it is also highly incomplete. As a result, interoperability issues continue to frustrate developers, as well as the operators of systems that use them, even years after protocols have been fully deployed.

Our goal is to automate the discovery of interoperability problems in implementations of protocols for the next-generation Web. We have developed an approach to achieving this goal using advanced program analysis techniques, and the initial results are promising. We propose to extend the kinds of interoperability problems we are able to identify, and to enhance the scalability and precision of our approach to the point where it can be used effectively to analyze real-world protocol implementations of the next-generation Web.

Deborah M Gordon, Stanford University
Ant colony algorithms for large-scale distributed systems

Distributed networks, such as ant colonies and the internet, operate without central control. Ant distributed algorithms use shared resource allocation (where the resource is ants) to explore and exploit the environmental resources such as food and nest sites. The proposed work will identify the algorithms used by ant species that have evolved in an environment in which connectivity tends to be transient, and whose algorithms show high scalability to deal with dynamic, on-demand situations in which resources and traffic occur in bursts. Ant protocols combine aspects of both loss- and delay-based protocols, but use much less information. Because these are systems with little state, they are easily restored when disrupted. The goal is to consider where the analogies to data networks hold up and how to exploit the useful correspondences.

Edward W. Knightly, William Marsh Rice University
Joint Traffic and Spectrum Management from MHz to GHz

This project aims to exploit diverse spectral ranges to realize wide-spectrum WiFi systems spanning from 100's of MHz to GHz. Sub-GHz and UHF bands are the undisputed "beach front property of spectrum" due to their capability to provide kilometer-scale links without requiring line-of-sight communication and with moderate-sized antennas. On the other hand, GHz bands are the "autobahn of spectrum," enabling transmission rates of Gb/sec and beyond. Joint use of these diverse bands yields an unprecedented opportunity for performance and network capabilities. In this project, we will explore protocols and tools for accessing the right spectrum for the right traffic. We will deepen our understanding of UHF-band wireless networking through deployment and war driving and will perform an early trial for urban-scale measurements and protocol development that jointly utilizes unused UHF bands as well as legacy 900 MHz, 2.4 GHz, and 5 GHz bands.

Joseph Paradiso, MIT Media Lab
Intelligent Multimodal Sensor Networks for Ubiquitous Computing

The devices in our environment are becoming increasingly sensor-rich and deeply networked. This provides an unprecedented opportunity to combine the multimodal sensor data from nearby devices to generate a more accurate estimation of the true state of an environment. This one-year project is focused on developing an intelligent multimodal sensor network that can dynamically aggregate the real-time data from a wide range of heterogeneous sensors. We propose to develop a hierarchical framework designed to aggregate multimodal sensor data and apply machine-learning algorithms to this data to make intelligent estimations about the current state of an environment. This project has two significant benefits for Cisco Research. First, we will conduct fundamental research in multimodal sensor fusion. Second, we will develop an intelligent sensor framework that can be directly integrated with other Cisco applications and systems.

Bin Liu, Tsinghua University
Cost-effective Memory Architectural Design for Content Router

A Content Router in NDN is a router that is enabled to cache content in order to serve subsequent requests. Though the capacity of current DDR3 memories and disks can be very large, it is still too small compared with the volume of content that traverses a router. Therefore, a Content Router requires a cost-effective memory/storage architecture that selectively caches the content it witnesses. How to efficiently cache content in a limited memory size and achieve a high cache hit rate remains untouched in previous router architecture studies. Currently, there is no router architecture that supports large-scale and long-term caching. Designing a new architecture (mainly the memory architecture) for Content Routers, accompanied by proper caching policies and scheduling techniques, is in urgent demand.
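As a point of reference for the caching-policy work, the sketch below shows a baseline capacity-limited content store keyed by content name with LRU eviction. It is our own illustration of the kind of simple policy the proposed memory architecture and caching policies would need to improve upon; capacity is counted in objects purely for simplicity.

```python
# Baseline sketch: an LRU content store keyed by content name (illustrative only).
from collections import OrderedDict

class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()   # name -> content, ordered by recency of use

    def get(self, name):
        """Return cached content and refresh its recency, or None on a miss."""
        if name not in self.store:
            return None
        self.store.move_to_end(name)
        return self.store[name]

    def put(self, name, content):
        """Insert content, evicting the least recently used object when full."""
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)

cs = ContentStore(capacity=2)
cs.put("/video/seg1", b"...")
cs.put("/video/seg2", b"...")
cs.get("/video/seg1")            # hit: refreshes recency of /video/seg1
cs.put("/video/seg3", b"...")    # evicts /video/seg2, the least recently used
print(list(cs.store))            # ['/video/seg1', '/video/seg3']
```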

Henry Fuchs, The University of North Carolina at Chapel Hill
3D Informal Room-sized Telepresence System

We propose to improve a room-sized telepresence system for informal use by multiple individuals in settings such as break rooms, cafes, and lobbies. At each site the system consists of a wall that is tiled with large 2D displays and a set of miniature video cameras. The display walls at each site serve as electronic windows between the two sites. A key challenge is to generate convincing, natural, and useful imagery of the remote site for the multiple individuals at each site. In contrast to most current teleconferencing systems in which participants sit in specified positions, in the proposed system individuals can be widely distributed, and can freely move around over a large physical area. Thus they see the remote site's imagery from widely different viewing positions. Naive solutions to this challenging problem have proven unsatisfactory. For example, if a wide-angle panorama of each site is provided to the other, even simple activities such as an individual at one site talking to one at the other site are not supported properly. The failure is caused by lack of proper eye contact unless both participants happen to be positioned directly at the center of each panorama. We propose to investigate novel solutions to this problem by continuously acquiring not just a 2D panorama of each scene, but acquiring the full 3D scene using a combination of commodity depth cameras and conventional color cameras. We plan to generate a constantly updated 3D surface description of the entire room and all its inhabitants as well as all the static and moving objects. From this 3D data, we plan to generate an optimal view for the remote participants. Such images may be rendered with a new method that we call "depth-dependent perspective", an approach that generates an apparently undistorted panorama that also supports proper eye contact for all the most likely locations in the two scenes.

Li Chen, University of Saskatchewan
Study Single Event Effects with ST 28nm FD-SOI Technology

Silicon-On-Insulator (SOI) technologies, especially fully-depleted (FD) SOI, are much more tolerant to single event effects (SEE) than bulk technologies due to the thin and isolated transistor substrate, which makes FD-SOI an attractive option for the silicon industry. We propose to develop a number of storage cells (hardened and non-hardened) and characterize their SEE performance using the most advanced FD-SOI 28nm CMOS technology from STMicroelectronics, and compare the results with those of bulk CMOS 28nm technologies to evaluate/confirm the SOI SEE tolerance. We will also develop fault-tolerant combinational logic circuits using various techniques, such as differential logic gates and charge sharing/cancelling techniques in the layout. A test chip including various test structures will be designed and fabricated. The fabricated test chip will be tested to verify its functionality, and then laser, proton, neutron and heavy ion testing will be conducted. The project will provide valuable information to help the industry improve microelectronics reliability.

Michael Wirthlin, Brigham Young University
Self Recovery using Algorithmic Partial TMR (A-pTMR)

Triple-modular redundancy (TMR) is a well-known design technique for mitigating the effects of single event upsets (SEU) on FPGAs. Full circuit TMR, however, is very expensive and is not feasible for use within commercial systems. This proposal will investigate and develop algorithms for implementing partial TMR (pTMR) within FPGA circuits. These algorithms will identify the most sensitive circuit regions of an FPGA system and apply TMR to those regions in an attempt to reduce the SEU failure rate of a system with as few resources as possible. Experiments will be performed on actual FPGA-based systems to verify the effectiveness of algorithmic partial TMR.

Mark Zwolinski, University of Southampton
Aging Mitigation in Advanced CMOS

As feature sizes in advanced CMOS processes are reduced to 35nm or less, significant aging effects are observed. These effects have a profound impact on the reliability of integrated circuits, including processors. While some mitigation may be possible by design, the challenge is to include minimal additional circuitry to monitor the state of a system and to anticipate impending failure. The aims of this project are: to gain a deeper understanding of the failure mechanisms and their relation to the manufacturing process; to propose on-chip monitoring structures; to propose appropriate error detection schemes; and to develop reconfiguration and graceful degradation strategies. These aims must be balanced against competing design objectives: area, delay and power. We will explore the incorporation of our proposals into a commercial processor, typically an ARM device.

Robert Reed, Vanderbilt University
Establishing the Framework for Monte-Carlo Calculation of the Muon Flux in the Earth's Atmosphere

Although muons were among the first elementary particles to be identified (1937), analyzing the potential emerging threat to semiconductor operations posed by muons generated by cosmic rays presents new challenges both for measurement and computation. Unfortunately the low-energy end of the energy spectrum of muons at the Earth's surface (or at higher altitudes) is the least well known and is the most important for estimating the soft error rates for ICs. This project will construct a simple point-symmetric model of the Earth with the atmosphere described in approximately 90 uniform 1 km layers. Typical cosmic ray spectra containing all significant cosmic ray components, as obtained from CRÈME, will be used to simulate particle showers in the atmosphere and the mu+, mu-, and neutron spectra (neutrons for comparison and as a consistency check) will be computed by Monte Carlo simulation for various altitudes and angles relative to the local normal of the Earth's surface. By estimating the low-energy muon fluxes, additional complexities (such as muon propagation in models of buildings) can be carried out for accurate FIT predictions.

Andrea Fumagalli, The University of Texas at Dallas
Disaster Recovery in GMPLS Networks: a Combined Layer 0 and Layer 3 Perspective

Label Switched Path (LSP) services constitute a fundamental and vital element of today's Internet. (Layer 3) LSPs are provisioned over (Layer 0) lightpaths, which are established across wavelength division multiplexing (WDM) optical networks. The objective of this project is to investigate and identify the best practices in achieving reliable LSP services despite the possibility of unpredictable disaster scenarios. As a general rule of thumb, Layer 3 recovery techniques are more bandwidth efficient, while Layer 0 recovery techniques are faster [4]. When considering disaster scenarios, recovery techniques that are implemented in a single layer might not be as effective as recovery techniques that are jointly implemented across Layer 3 and Layer 0. Control strategies for cross-layer recovery techniques must be identified in order to account for: 1) layer 3 and layer 0 coordination, 2) race conditions, and 3) physical impairments in layer 0.

The project will be considered a success if at least one of the identified recovery techniques is considered for adoption by Cisco and/or the simulation platform developed within this project is adopted by scientists beyond the team developing the tool at UT Dallas.

Stefan Lucks, Bauhaus Universität Weimar
MIRACLE: MIsuse Resistant Authenticated Ciphers for low-Latency and Embedded systems

Authenticated Encryption solves several problems in the use of cryptography, providing both privacy and authenticity/integrity, and maintaining good performance. But current methods do leave some problems outstanding, including problems with nonce misuse, decryption misuse, and the size of data on the wire. Nonce misuse is problematic in some virtual machine environments, where machine cloning is possible. Decryption misuse is a significant issue at high data rates, such as those in optical networks. Data overhead must be minimized in low-power wireless environments, which are becoming increasingly common. In some constrained implementation environments, which are relevant to the Internet of Things, block ciphers may perform significantly worse than hash functions. It appears to be possible to address these issues by applying new cryptographic design ideas, but it is still an open problem to identify precise designs that meet these needs while also being secure and efficient.

Krishnendu Chakrabarty, Duke University
Advanced Testing and Design-for-Testability Solutions for 3D Stacked Integrated Circuits

Three-dimensional stacked integrated circuits (3D SICs) based on through silicon vias (TSVs) promise to overcome barriers in interconnect scaling, thereby offering an opportunity to achieve higher performance using CMOS.

This three-year research project is investigating 3D SIC testing in a systematic and holistic manner. The research scope includes three topics: test content and test access, test cost modeling, and built-in self-test (BIST).

Amogh Dhamdhere, University of California, San Diego
Volume-based Transit Pricing: Which Percentile is Right?

Internet Service Providers (ISPs) typically use volume-based pricing to charge their transit customers. Over the years, the industry standard for volume-based pricing has been the 95th-percentile pricing method: the 95th percentile of traffic volume across 5-minute samples is used to calculate the total transit payments billed to the customer. This method provides an estimate of the maximum traffic rate that the customer could send -- useful for capacity planning in the provider's network -- while also allowing the customer to send transient bursts of traffic in a few 5-minute intervals.
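As a concrete illustration of the method (our own sketch, not any provider's billing code), the snippet below records one rate sample per 5-minute interval, sorts the samples, and bills at the nearest-rank 95th-percentile sample, so bursts confined to fewer than 5% of intervals do not raise the bill. The traffic mix and the price per Mbps are made-up numbers.

```python
# Illustrative 95th-percentile transit billing over 5-minute rate samples.
import random

def percentile_rate(samples_mbps, percentile=95):
    """Nearest-rank percentile of the 5-minute rate samples (Mbps)."""
    ordered = sorted(samples_mbps)
    rank = max(1, round(len(ordered) * percentile / 100))   # 1-based nearest rank
    return ordered[rank - 1]

def monthly_bill(samples_mbps, price_per_mbps=1.50, percentile=95):
    return percentile_rate(samples_mbps, percentile) * price_per_mbps

random.seed(0)
# One month is roughly 8640 five-minute samples: steady ~100 Mbps plus a few large bursts.
samples = [random.gauss(100, 10) for _ in range(8600)] + \
          [random.uniform(800, 1000) for _ in range(40)]
print(round(percentile_rate(samples), 1))   # ~117 Mbps: the 40 burst samples are ignored
print(round(monthly_bill(samples), 2))
```

Varying the `percentile` argument per customer is exactly the knob the proposed optimization framework would tune.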

But the method is, by design, sensitive to changes in traffic composition and characteristics, and we have seen dramatic changes in both over the last decade. The evolution of such a diverse and dynamic ecosystem suggests the need for an empirical and computational exploration of the effectiveness of the 95th percentile charging approach on provider cost recovery, and more specifically how varying the charging percentile across peers would influence this effectiveness.

We propose two related research tasks. We will first perform a historical study of traffic patterns between neighboring networks, to understand the relation between the 95th percentile and other measures of the traffic distribution (e.g., median, mean, or peak), and how trends in these parameters influence the economics of 95th percentile pricing. Second, we propose to study the use of the same 95th percentile charging on a per-customer basis. We hypothesize that the appropriate charging percentile depends on the volume distribution of the traffic sent by a particular customer network, which in turn depends on the business type of that network, its size, and user base. We propose to develop an algorithm by which a provider can select the charging percentile on a per customer basis to optimize its cost recovery. A provider could use this optimization framework to selectively offer discounts, i.e., a lower percentile than the 95th, either to attract new customers or retain existing customers.

The provider could also use this framework to price its service in a way that more accurately reflects the costs it incurs. The same computational framework is applicable to the study of emerging volume-based pricing schemes for residential customers, as well as to explore the debate over network neutrality and broader questions of appropriate business models for Internet data transport.

Dawn Song, University of California, Berkeley
Automatic Protocol State-Machine Inference for Protocol-level Network Anomaly Detection

Next-generation network anomaly detection systems aim to detect novel attacks and anomalies with high accuracy, armed with deep stateful knowledge of protocols in the form of protocol specifications. However, protocol specifications are currently mainly crafted by hand, which is both expensive and error-prone.

We propose a new technique to automatically infer protocol specifications given one or more program binaries implementing the protocol.

Our technique is a synergistic combination of symbolic reasoning on program binaries and active learning techniques. In contrast to existing techniques, our proposed technique can automatically infer complete protocol specifications with no missing transitions, enabling anomaly detection with high accuracy.
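To make the intended use concrete, the hypothetical sketch below shows how an inferred protocol state machine could drive protocol-level anomaly detection: any observed message with no transition from the current state is flagged. The states, message types, and transitions are illustrative placeholders, not an actual inferred specification.

```python
# Hypothetical inferred specification: transitions[state][message_type] -> next state.
transitions = {
    "INIT":      {"HELLO": "HANDSHAKE"},
    "HANDSHAKE": {"AUTH": "SESSION", "HELLO": "HANDSHAKE"},
    "SESSION":   {"DATA": "SESSION", "CLOSE": "INIT"},
}

def check_flow(messages, start="INIT"):
    """Replay a message sequence against the state machine.
    Returns (is_anomalous, index_of_offending_message)."""
    state = start
    for i, msg in enumerate(messages):
        next_state = transitions.get(state, {}).get(msg)
        if next_state is None:
            return True, i          # no such transition in the specification
        state = next_state
    return False, -1

print(check_flow(["HELLO", "DATA"]))                   # (True, 1): DATA before AUTH is anomalous
print(check_flow(["HELLO", "AUTH", "DATA", "CLOSE"]))  # (False, -1): a legitimate session
```

The more complete the inferred state machine, the fewer legitimate transitions are mistaken for anomalies, which is why inferring specifications with no missing transitions matters.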

David Wetherall, University of Washington
Web Privacy Using IPv6 and the Cloud

We propose to target Web usage and develop a privacy architecture that spans both network and application layers, as is needed for privacy in practice. Our key idea is to leverage the large IPv6 address space to separate user activities all the way from different profiles within the browser to different IP addresses within the network, and to leverage the cloud for low-latency IP address mixing. This will allow users to create separate browsing activities on a single computer that can't be linked by third parties. We believe that these ideas can greatly raise everyday privacy levels while keeping forwarding efficient.
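A minimal sketch of the addressing side of this idea (our illustration, not the full architecture) is shown below: each browser profile is mapped to a distinct pseudorandom interface identifier within the host's /64 prefix, so traffic from different profiles originates from different addresses. The prefix, profile names, and salt are hypothetical; a deployed system would also rotate identifiers and route them through the cloud-based mixing described above.

```python
# Illustrative per-profile IPv6 address derivation within an assumed /64 prefix.
import hashlib
import ipaddress

PREFIX = ipaddress.IPv6Network("2001:db8:1:2::/64")   # hypothetical delegated prefix

def profile_address(profile_name, salt=b"per-session-salt"):
    """Map a browser profile to a pseudorandom interface identifier in PREFIX."""
    digest = hashlib.sha256(salt + profile_name.encode()).digest()
    iid = int.from_bytes(digest[:8], "big")            # 64-bit interface identifier
    return ipaddress.IPv6Address(int(PREFIX.network_address) | iid)

for profile in ("work", "shopping", "banking"):
    print(profile, profile_address(profile))           # three unlinkable source addresses
```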

Ingolf Krueger, UC San Diego
Application Aware Security in the Data Center

The complexity of today's distributed systems, as well as their dynamic nature, has rendered securing distributed applications increasingly difficult. We believe that overcoming these difficulties requires a global view of the distributed system. Such a global view enables detecting anomalies in the orchestrated behavior of a distributed application from the network. Currently, network security devices such as firewalls and intrusion detection systems target network security by observing only well-defined network protocol signatures or by performing coarse-grained anomaly detection based on general traffic characteristics. These systems provide minimal application-level security, because they are not aware of the logic of the custom applications deployed over the network. We propose enhancing the network to offer more application-aware services. In particular, we propose to investigate opportunities for enabling application-aware network security inside the data center to detect attacks and anomalies. As a result of this research, we expect to be able to deploy monitors in the network that provide a distributed application-aware security service. This will enable system and network administrators to detect and isolate security breaches and anomalies in a timely manner.

Eby G. Friedman, University of Rochester
3-D Test Circuit on Power Delivery

The effective operation of 3-D circuits strongly depends upon the quality of the power delivered to the system, specifically power generation and distribution. There are two primary challenges to the development of an effective power generation and distribution system. The first challenge is to generate and distribute different and multiple supply voltages since diverse circuits and technologies require different voltage levels. The number and variety of these voltage levels within a heterogeneous system can be significant in modern multi-voltage systems. The second challenge is to guarantee signal integrity among these circuit blocks. We are proposing a test circuit composed of a series of test structures to examine several aspects of both power generation and power delivery in 3-D integrated systems. This test circuit will investigate and quantify fabrication, thermal, memory systems, signal integrity, and power integrity issues in 3-D integrated circuits. The test structures will provide insight into 3-D IC design, manufacturing, and test. Based on results from the test circuit, models for signal and power integrity, thermal effects, and reliability will be developed and calibrated. The test circuit will be composed of six types of test structures, where the first test structure examines power generation in 3-D structures, the second test structure investigates power distribution in 3-D structures, the third test structure focuses on interface circuits for interplane power delivery, the fourth test circuit examines memory stacking, the fifth test structure examines the effect of thermal cycling on TSVs, and the final test structure will characterize the electrical properties of through silicon vias (TSV).

Krishnendu Chakrabarty, Duke University
Advanced Testing and Design-for-Testability Solutions for 3D Stacked Integrated Circuits

Since testing remains a major obstacle that hinders the adoption of 3D integration, test challenges for 3D SICs must be addressed before high-volume production of 3D SICs can be practical. Breakthroughs in test technology will allow higher levels of silicon integration, fewer defect escapes, and commercial exploitation.

This three-year research project is among the first to study 3D SIC testing in a systematic and holistic manner. Existing test solutions will be leveraged and extended wherever possible, and new solutions will be explored for test challenges that are unique to 3D SICs. A consequence of 3D integration is that there is a natural transition from two "test insertions" (wafer sort and package test) to three or more "test insertions", where additional test insertions correspond to the testing of the stack after assembly; the stack may need to be tested multiple times depending on the number of layers, test cost, and the assembly process; we call it known good stack (KGS) test. Therefore, there is an even greater emphasis on the reuse of DFT infrastructure to justify the added cost and to ensure adequate ROI.

The proposed research includes test content development, test cost modeling, and built-in self-test (BIST).

Timothy G. Griffin, University of Cambridge
IGPs with expressive policy

Link-state routing protocols are often desirable because of their relatively fast convergence.

This results from each router using an efficient path computation, such as Dijkstra's algorithm, rather than the slower "path exploration" method associated with all distributed Bellman-Ford algorithms. On the other hand, protocols in the Bellman-Ford family, such as the Border Gateway Protocol (BGP), seem to allow more expressive routing policies. Recent results (Sobrinho and Griffin) show that Dijkstra's algorithm can in fact be used to compute solutions in many policy-rich settings. However, such solutions are associated with locally optimal paths rather than globally optimal paths, and in some cases tunneling is required to avoid forwarding loops. These results could provide the foundation for a new class of intra-domain link-state protocols for networks that require expressive routing policies. This research project intends to initiate an exploration of this space.
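To illustrate the direction (a toy sketch of the general idea, not the algorithms studied by Sobrinho and Griffin), the code below parameterizes Dijkstra's algorithm by an abstract weight algebra: path weights are combined along arcs with `extend` and ranked with `rank`. It is instantiated with the widest-path (maximum bottleneck bandwidth) algebra; the algebraic properties (e.g., monotonicity) that the cited results require for correctness are assumed rather than verified here.

```python
# Dijkstra's algorithm over an abstract routing algebra (illustrative sketch).
import heapq

def generalized_dijkstra(graph, source, extend, rank, identity):
    """graph: {u: [(v, arc_weight), ...]}
    extend(w, arc): weight of a path after traversing arc
    rank(w): sortable key, smaller means more preferred
    identity: weight of the empty path at the source
    Returns the best path weight found for each reachable node."""
    best = {}
    heap = [(rank(identity), source, identity)]
    while heap:
        _, u, w = heapq.heappop(heap)
        if u in best:
            continue                      # node already settled with a preferred weight
        best[u] = w
        for v, arc in graph.get(u, []):
            if v not in best:
                w2 = extend(w, arc)
                heapq.heappush(heap, (rank(w2), v, w2))
    return best

# Widest-path instance: a path's weight is its bottleneck bandwidth,
# extension takes the minimum along the path, and larger bandwidth is preferred.
graph = {
    "A": [("B", 100), ("C", 10)],
    "B": [("C", 50), ("D", 20)],
    "C": [("D", 80)],
    "D": [],
}
widest = generalized_dijkstra(
    graph, "A",
    extend=lambda w, arc: min(w, arc),
    rank=lambda w: -w,                    # prefer larger bottleneck bandwidth
    identity=float("inf"),
)
print(widest)   # {'A': inf, 'B': 100, 'C': 50, 'D': 50}
```

Swapping in a different `extend`/`rank` pair models a different routing policy; whether the computed weights are globally or only locally optimal depends on the properties of that algebra.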

Edward W. Knightly, William Marsh Rice University
Joint Traffic and Spectrum Management from MHz to GHz

UHF bands are the undisputed "beach front property of spectrum" due to their capability to provide kilometer-scale links without requiring line-of-sight communication and with moderate-sized antennas. On the other hand, GHz bands are the "autobahn of spectrum" enabling transmission rates of Gb/sec and beyond. Joint use of these diverse bands yields an unprecedented opportunity for performance and network capabilities. In this project, we will explore protocols and tools for accessing the right spectrum for the right traffic. We will deepen our understanding of UHF-band wireless networking through deployment and war driving and will perform an early trial for urban-scale measurements and protocol development that jointly utilizes unused UHF bands as well as legacy 900 MHz, 2.4 GHz, and 5 GHz bands.

Henning Schulzrinne, Columbia University
An Independent Study of Mobility and Security in a Fully Integrated Heterogeneous Network

Today, the central mobility management question is a lot more than just "mobile IP vs. SIP mobility". Unlike during the initial phases of mobile wireless data services in the 1990s, today a mobile device can be simultaneously connected to several different heterogeneous wireless and wireline technologies. The industry at large, and in particular carriers, enterprises, and equipment manufacturers, is keenly interested in an independent study, carried out by a major university, to determine the essential elements of this new mobility paradigm, with a particular emphasis on the security aspects. We would like to perform a study comprising a survey of available tools, and then determine what protocol elements are still missing in order to facilitate delivery of these services across an evolving heterogeneous network. A prototype of the new paradigm for demonstration and experimental measurements, in a customized testbed, is an expected outcome, as well as publication of original results in respected academic journals. The participation of a scientist/technologist from the carrier environment is fundamental to gearing the technology findings toward direct usability of the product for Cisco customers. The ultimate objective is developing an infrastructure to provide the carrier customer with a seamless mobile user experience in a secure manner over a virtualized network.

Bernd Tillack, Technical University of Berlin - Department of High-Frequency and Semiconductor System Technologies
Investigation of SiGe HBT Ageing and its Influence on the Performance of RF Circuits

In this project the ageing of SiGe hetero bipolar transistors (HBTs) will be studied under so-called mixed-mode stress conditions (high-current and high-voltage stress beyond the emitter-collector breakdown voltage). The observed HBT degradation by hot-carrier injection will then be used to develop an "ageing function". This function models the increase of the base current and in turn will be used to predict long-term transistor degradation. Correspondingly, stress-adapted HBT compact models will be incorporated into appropriate radio-frequency circuits to simulate the influence of HBT ageing on circuit performance.

The scope and content of the work are appropriate for the master's thesis of a graduate student in electrical engineering.

Haitao Zheng, UC Santa Barbara
3D Wireless Beamforming for Data Center Networks

Contrary to prior assumptions, recent measurements show that data center traffic is not constrained by network bisection bandwidth, but is instead prone to congestion loss caused by short traffic bursts. Compared to the cost and complexity of modifying data center architectures, a much more attractive option is to augment wired links with flexible wireless links in the 60 GHz band. Current 60 GHz proposals, however, are constrained by link blockage and radio interference, which severely limit wireless transmissions in dense data centers.

In this project, we propose to investigate a new wireless primitive for data centers, 3D beamforming, where a top-of-rack directional antenna forms a wireless link by reflecting a focused beam off the ceiling towards the receiver. This reduces the link's interference footprint, avoids blocking obstacles, and provides an indirect line-of-sight path for reliable communication. Under near-ideal configurations, 3D beamforming extends the effective range of each link, allowing our system to connect any two racks using a single hop and reducing the need for multihop links. While still in an exploratory stage, 3D beamforming links could potentially provide a highly flexible routing layer that not only reduces unpredictable, sporadic congestion, but also provides an orthogonal control layer for software-defined networking.
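
To make the geometry concrete, the following minimal sketch (our own illustration, with an assumed flat reflective ceiling and equal antenna heights) computes the ceiling-bounce path length and the antenna tilt for a given rack separation.

    # With top-of-rack antennas at height h_a and a flat reflective ceiling
    # at height H, the ceiling-bounce path between two racks separated by
    # horizontal distance d can be computed by mirroring the receiver above
    # the ceiling. Values are illustrative assumptions.
    import math

    def ceiling_bounce(d, ceiling_height, antenna_height):
        rise = 2.0 * (ceiling_height - antenna_height)   # up and back down
        path_length = math.hypot(d, rise)                # reflected path (m)
        elevation = math.degrees(math.atan2(rise, d))    # tilt above horizontal
        return path_length, elevation

    # Example: racks 10 m apart, 4 m ceiling, antennas at 2.5 m.
    length, tilt = ceiling_bounce(10.0, 4.0, 2.5)
    print(f"path {length:.2f} m, antenna tilt {tilt:.1f} degrees")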

Ioannis Tomkos, Research and Education Laboratory in Information Technologies
Impact of flexible DWDM networking on the design of the IP+DWDM network

As IP traffic demand displays both topological disparity and temporal disparity, the need arises to cost-efficiently address these effects in a scalable manner. Emerging technologies in optical communication systems are enabling dynamic bit-rate adaptation and addressing inherent limitations of rigid fixed-grid single-carrier WDM networks. In fixed-grid networks, a fixed amount of spectrum is assigned per connection without considering requirements in terms of capacity and transparent optical reach. This results in great inefficiencies. From a capacity perspective, it leads to either under-utilized connections or to the introduction of an extra layer (OTN) to achieve a higher utilization. From a reach perspective, either an over-engineered short-reach connection is assigned or the deployment of expensive optical-electrical-optical regenerators becomes a necessity in order to achieve longer reach. Thus, the need arises to introduce alternative technologies enabling the flexible assignment of spectrum. To this end "FlexSpectrum" architectures are proposed, which allow capacity and reach to be traded off against the spectrum utilized per connection. These architectures utilize recent advances in optical orthogonal frequency division multiplexing and Nyquist-WDM, which have set the stage for envisioning fully flexible optical networks that are capable of dynamically adapting to the requirements of each connection. Hence, so-called "superchannels" are formed, consisting of a set of tightly spaced optical subcarriers. The modulation level, the symbol rate, and the ratio of FEC and payload can all be adjusted according to the required bit-rate and reach. However, the setting of these different degrees of freedom is not straightforward, as they are to a great degree intertwined. For example, a higher bit-rate can be achieved by moving to higher multi-level modulation formats (such as PM-QPSK), but this comes at the expense of reduced reach. A higher reach can be achieved by increasing the symbol rate, but this translates into greater spectrum requirements. By increasing the percentage of FEC, a greater reach can be achieved without increasing the utilized spectrum. This, however, comes at the expense of a reduced payload (i.e. reduced bit-rate). It is hence clear that multiple tradeoffs occur when trying to meet the requirements at the connection level.
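
The following back-of-the-envelope sketch illustrates how these knobs interact for a superchannel; the parameter values and the simple accounting of FEC as a fraction of the line rate are assumptions for the example, not project results.

    # A back-of-the-envelope sketch of the trade-offs described above; all
    # parameter values are illustrative assumptions.
    def superchannel_bit_rate(n_subcarriers, symbol_rate_gbaud,
                              bits_per_symbol, fec_fraction):
        """Net bit-rate in Gb/s. fec_fraction = share of the line rate
        spent on FEC rather than payload."""
        line_rate = n_subcarriers * symbol_rate_gbaud * bits_per_symbol
        return line_rate * (1.0 - fec_fraction)

    # 10 subcarriers, 32 Gbaud, PM-QPSK (4 bits/symbol over 2 polarizations),
    # 20% of the line rate devoted to FEC:
    print(superchannel_bit_rate(10, 32, 4, 0.20), "Gb/s net")   # 1024.0
    # Raising the FEC share to 30% buys reach but cuts payload:
    print(superchannel_bit_rate(10, 32, 4, 0.30), "Gb/s net")   # 896.0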

However, while at the level of a static connection the benefits of FlexSpectrum architectures in terms of spectrum savings are evident, new constraints and challenges are introduced when examining a realistic network-wide scenario. Especially when the justification of deploying new technologies is directly related to economic benefits, more holistic techno-economic modeling and studies are key to this analysis. As the introduction of new capabilities imposes requirements on the network equipment, additional costs are introduced in the network. For example, the introduction of a FlexSpectrum sliceable transceiver, which can be effectively sliced into multiple "virtual transceivers" each accommodating different connections, incurs more cost as its degree of flexibility increases. Additionally, more complicated and cost-intensive optical switches (ROADMs) may need to be deployed. Considering the complexity introduced in flexible networking scenarios, we plan to conduct extensive techno-economic analysis in different network settings in order to identify potential benefits and trade-offs. Since some of the network equipment considered is not yet commercially available, the sensitivity of the results to variations in such costs will be examined. Additionally, the "break-even" point when comparing alternative scenarios will be identified.

Current research results in flexible optical networking have focused on the optical layer, addressing mostly technological aspects of the physical layer and savings in terms of spectrum. In contrast, we consider both the optical and the IP layer. We plan to use an industry-leading tool (MATE) for proper modeling of the IP layer behavior. We expect that the additional consideration of the IP layer will lead to highly relevant and interesting results, as a holistic and realistic view of the overall benefits of flexible optical networking will be examined. Basic assumptions will be challenged, as in "O. Gerstel, et al., Increased IP Layer Protection Bandwidth Due to Router Bypass, ECOC 2010", where the assumption that protection bandwidth in routers is independent of the amount of bypass is discredited. While a more meshed network of small pipes may appear beneficial from a purely optical perspective, it may incur extra costs once the IP layer is considered. To the best of our knowledge no previous work exists that jointly considers the interaction of the IP and the flexible optical layer, offering insight into the overall incurred costs, especially under failure events in the IP layer.

In the following, we highlight the main points of our proposal. As multiple degrees of freedom become available, the question arises as to what the actual flexibility needs are. Considering that transparent reach requirements are critical to the selection of the transmission technology characteristics, we aim to identify a relationship between the degree of flexibility and the considered physical network topology. We go one step further and examine the design of the IP topology itself. Different strategies to design IP topologies can be developed, leading to different degrees of connectivity between the routers. This translates into a different traffic demand for the optical layer, which in turn affects the transparent reach and capacity requirements. Hence, we aim to examine, through techno-economic analysis, the circumstances under which additional flexibility yields improvement in terms of performance (e.g. utilized spectrum, CAPEX) in a multi-layer setting.

Douglas Comer, Purdue University
Increasing Reliability Of High-End Routers Through Self-Checks By An Independent Observer

High reliability is a key requirement for tier-1 service providers. Because it handles traffic at the core of the Internet, a tier-1 network operates continuously and experiences steady traffic.

To handle high traffic loads, routers used in tier-1 networks employ multiple control processors distributed among multiple line cards in multiple chassis. Thus, the control plane within such a router can be viewed as a distributed system. To achieve high reliability, the control processes must employ techniques from parallel and distributed computing which will permit them to coordinate the various control functions so the overall system operates as a single, unified entity. We propose to investigate a technique for increasing reliability in such systems: the use of an outside observer that can monitor the operation and verify that the outcomes are aligned with expectations.

Ramana Rao Kompella, Purdue University
Towards Better Modeling of Communication Activity Dynamics in Large-Scale Online Social Networks

Online social networks (OSNs) have witnessed tremendous growth and popularity in recent years. The huge success and increasing popularity of social networks make it important to characterize and study their behavior in detail.

Recent work in analyzing online social network data has focused primarily on either static social network structure (e.g., a fixed network of friendship links) or evolving social networks (e.g., a network where friendship links are added over time). However, popular OSN sites provide mechanisms to form and maintain communities over time by facilitating communication (e.g., Wall postings, e-mail), content sharing (e.g., photographs, videos, links), and other forms of activities. Analyzing social activity networks formed by observing who engages with whom in a particular activity is an important area of research that has received very little attention to date.

The goal of this project is to develop methods to support the analysis and modeling of activity networks. We propose to address several weaknesses of previous related work with respect to this goal: (1) Although researchers are beginning to focus on activity networks, most studies have focused primarily on friendship networks, and as such, activity networks have not been studied as well. (2) Sampling approaches have either focused on static graphs or graphs that are fully observable, neither of which is appropriate for the large-scale streaming data common in activity networks. (3) Evaluation of sampling algorithms has focused on the accuracy of graph metric estimation; there has been little work on developing a theoretical framework for the task and on understanding the theoretical tradeoffs between preserving various properties of the graph. (4) Research on network evolution models has focused on the underlying network structure (i.e., friendships) rather than on activity traffic that is overlaid on the existing network structure. In this project, we propose to address the aforementioned issues by developing a suite of algorithmic and analytic methods to support the characterization and modeling of activity networks. In particular, our key contributions are: (1) We will conduct extensive static and temporal characterization studies of social network activity, comparing the observed properties to those of the underlying friendship network. (2) We will study sampling techniques that can preserve graph properties for different communication activity graphs. (3) We will investigate the fundamental theoretical trade-offs between preserving different properties of the graph and investigate different types of sampling errors to either adjust or correct for bias. (4) We will develop procedural modeling techniques to generate social network activity graphs conditioned on the underlying network structure and profile information to better represent the temporal dynamics and burstiness of activity patterns.
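
As a point of reference for the streaming setting addressed in (2), a basic reservoir sampler over an activity-edge stream is sketched below; it illustrates only the constraint, not the property-preserving sampling this project proposes to develop.

    # Reservoir sampling keeps a uniform sample of k interaction edges
    # without storing the full stream. Illustration of the streaming
    # setting only; the stream below is synthetic.
    import random

    def reservoir_sample_edges(edge_stream, k, seed=0):
        rng = random.Random(seed)
        sample = []
        for i, edge in enumerate(edge_stream):
            if i < k:
                sample.append(edge)
            else:
                j = rng.randint(0, i)
                if j < k:
                    sample[j] = edge
        return sample

    # Example stream of (actor, target, timestamp) wall-post events:
    stream = [("u%d" % (i % 50), "u%d" % ((i * 7) % 50), i) for i in range(10000)]
    print(reservoir_sample_edges(stream, k=5))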

Frank Mueller, NC State University
A Benchmark Suite to Assess Soft Routing Capabilities of Advanced Architectures

Advanced architectures provide novel opportunities to replace costly ASICs currently used for routers. Such commodity architectures allow routing to be performed in software. The objective of this work is to define metrics and create a benchmark suite that automatically derives quantitative measurements to allow different architectures to be compared as to their suitability for soft routing. The goal of this project is to identify suitable metrics, develop test kernels, and package the resulting software so that measurements can be easily obtained on generic architectures. Our vision is that this benchmark suite be widely utilized not only by Cisco but also by hardware vendors to further the capabilities of their architectures by providing specific ISA support for soft routing. Such a development would result in a competitive marketplace for architectural support of soft routing and could contribute significantly to cutting the costs of future routing development.

Tony O'Driscoll, Duke University
Unified Communications Interfaces for Immersive Enterprise Collaboration

This project seeks to make a fundamental contribution to the field of human centered computing by exploring new interface approaches for managing advanced unified communications services within 3D virtual workspaces intended to support group-based learning and problem solving among cohorts within a university course setting.

We will integrate external web, media, and Cisco collaboration services with immersive 3D workspace technologies to enable a well-provisioned three-dimensional wiki-space with advanced unified communication/presence capabilities that will be tested in a live higher-education setting. We will explore how best to enable and optimize the virtual workspace user interface to leverage unified communication/presence in support of group problem solving.

We will also evaluate the efficacy of different 3D user interface configurations against existing 2D unified communications/presence tools. User interface configurations will be tested by teams of faculty and students within Duke's Cross Continent MBA (CCMBA) program at the Fuqua School of Business. We propose to deploy and evaluate a unified communications/presence-enabled 3D virtual workspace collaboration context by:

* Establishing a 3D virtual workspace-integrated unified communications/presence service so that all Cisco presence and chat services are extended to within virtual workspaces

* Developing a virtual workspace activity feed into Cisco's Quad product via a web services API so that information about defined actions within virtual workspaces is incorporated into the Quad activity stream

* Integrating video-based communication services into, and from, virtual workspace contexts to allow integration of multiple media channels into a single virtual context

* Establishing a means of data mining and reporting on user activities within virtual workspaces so that relevant tag clouds are associated with each user's avatar and to enable robust evaluation of the environment's use

* Field-testing various user interface configurations by delivering 3D collaboration contexts with integrated communication and presence services in support of team-based communication/collaboration within the Cross Continent MBA (CCMBA) program at Duke's Fuqua School of Business.

Nikil Jayant, Georgia Tech
Video Coding Technologies for Error-resiliency and Bit-rate Adaptation

This research proposal follows previous and on-going work at Georgia Tech in the area by Nikil Jayant and team. In particular, it draws upon (a) research on video streaming as performed under a Cisco RFP in 2008 and (b) research on three-screen television and a home gateway (2009-2010).

As in previous work, the proposed research includes an end-to-end view of bandwidth and visual quality. It includes the effect of compression artifacts, as introduced by video encoding and incomplete data reception, and measurement of received video quality in a subjectively meaningful way in response to the following two conditions:

  1. Adaptation to bit-rate variations as a result of changes in network bandwidth, and
  2. Incomplete video reception due to network impairments, such as burst errors.

The overall goal is to maximize perceived video quality under a total bandwidth (bit-rate) constraint in both of these conditions.
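
For illustration of condition (1), a minimal throughput-driven rate-selection rule is sketched below; the encoding ladder and safety margin are assumptions for the example, not part of the proposal.

    # Pick the highest rung of an encoding ladder that fits within a safety
    # margin of the measured throughput. Ladder and margin are illustrative.
    def select_bitrate(measured_kbps, ladder_kbps, safety=0.8):
        affordable = [r for r in sorted(ladder_kbps) if r <= safety * measured_kbps]
        return affordable[-1] if affordable else min(ladder_kbps)

    ladder = [400, 800, 1500, 3000, 6000]          # kbps
    for throughput in (500, 2200, 9000):
        print(throughput, "->", select_bitrate(throughput, ladder), "kbps")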

John William Atwood, Concordia University
Automated Keying, Authentication and Adjacency Management for Routing Protocol Security

Routers are constantly exchanging routing information with their neighbors, which permits them to achieve their primary task of forwarding packets.  To help ensure the validity of the routing information that is exchanged, it is important to authenticate the sender of that information.  Although some routing protocol specifications exist that mandate a "security option" that defines how to protect the information to be exchanged, all these specifications consider only manual key management, i.e., there is no defined procedure for automatically managing the keys used for these exchanges, or for ensuring the authenticity of the discovered neighbors.  Automatic procedures are essential if the provided security is to be effective over the long term, and easy to manage.

The problem of managing the keying and authentication for routing protocols can be cast as a group management problem with some unique requirements: potentially a very large number of small groups, robustness across re-boots, and effectiveness when unicast routing is not yet available.

We propose to design, implement, and test a distributed, automated key management and adjacency management system, for two example protocols: OSPF (a widely-deployed unicast routing protocol) and PIM-SM (the only multicast routing protocol with significant deployment). We will use both hardware routers (Cisco) and software routers (XORP) in our implementations. The resulting system will provide a proof-of-concept for future automated systems.

In addition, the exchanges among the components of the distributed management system will be formally validated using AVISPA, to ensure that the necessary security properties are maintained.

Chunming Qiao, SUNY Buffalo
Basic Research in Human Factors aware Cyber-Transportation Systems

A cyber transportation system (CTS), which applies the latest technologies in cyber and vehicular systems and transportation engineering, can be critically useful to our society, as it can reduce accidents, congestion, pollution, and energy consumption. As the effectiveness and efficiency of CTS will largely depend on how human drivers could benefit from and respond to such a system, this project proposes multi-disciplinary research which, for the first time, takes human factors (HF) into account when designing and evaluating new CTS tools and applications to improve both traffic safety and traffic operations. The proposed research will focus on novel HF-aware algorithms and protocols for prioritization and address routing/scheduling and fusion of various CTS messages, so as to reduce drivers' response time and workload, prevent duplicate and conflicting warnings, and ultimately improve safety and comfort. An integrated traffic and network simulator capable of evaluating these and other CTS algorithms and applications will also be developed and used to complement other ongoing experimental activities at Cisco and elsewhere.

Li Chen, University of Saskatchewan
Investigate the root cause of Intersil DC-DC converter failure using pulsed laser

One Cisco end customer experienced an unusually high field failure rate with network linecards in a wafer fab. The root cause was traced to a DC-DC converter supplied by Intersil Inc. To further localize the root cause within the converter, pulsed laser experiments are proposed to continue the investigation. The objective of the project is to use the pulsed laser facility at the University of Saskatchewan to scan the DC-DC converter and identify the devices most sensitive to laser energy. These most sensitive devices are likely the root cause of the failures observed in the field. We aim to publish the research results in related conferences and journals.

Andrew Lippman, MIT Media Laboratory
Viral Transaction

We propose a focused, one-year research effort to address the architectures and applications of "Proximal networking."  We explore this in the context of transactions, which we define as instances of doing business or engaging in a negotiation of some kind in a place, with a purpose, and with people or institutions that are nearby.  Unlike pure "cyberspace" explorations, this is a realization of a means for advertising, enabling, and adding programmability to the transactions that we engage in during normal life, in real places, with real people.  Also, unlike social networks that assume an a priori connection between people realized through communications, our networks are space-dependent, time-varying, and opportunistic. 

Keith E. Williams, Rutgers, The State University of New Jersey
Infrastructure Improvements for Extracellular Matrix Research: A Computational Design

The extracellular matrix (ECM) is a complex network of collagens, laminins, fibronectins, and proteoglycans that provides a surface upon which cells can adhere, differentiate, and proliferate. Defects in the ECM are the underlying cause of a wide spectrum of diseases. We are constructing artificial, de novo, collagen-based matrices using a hierarchic computational approach, employing protCAD (protein Computer Automated Design), a software platform developed in our laboratory specifically for computational protein design. This will provide a powerful system for studying molecular aspects of the matrix biology of cancer. Successfully designed matrices will be applied to engineering safer artificial tissues. High-end computer networking is essential to the research proposal, as the migration of large data sets requires robust network infrastructure and hardware components for various laboratory, office, and equipment locations throughout our facility. To that end, we propose to replace our core end-of-life Catalyst 3548 100 megabits per second (Mbps) Layer 2 switches with hardware capable of 1 and 10 gigabit per second (Gbps) client-end throughput, thus increasing data clustering ten- and one hundred-fold, respectively.

Kishor Trivedi, Duke University
Availability, Performance and Cost Analysis of Cloud Systems

The goal of this project is to develop a general analytic modeling approach for the joint analysis of availability, performance, energy usage and cost applicable to various types of cloud services (Software as a Service, Platform as a Service, and Infrastructure as a Service) and cloud deployment models (private, public and hybrid clouds). Multi-level interacting stochastic process models that are capable of capturing all the relevant details of the workload, faultload and system hardware/software/management aspects will be developed to gain fidelity and yet retain tractability. Such scalable and accurate models will be used for bottleneck detection, capacity planning, SLA management, energy consumption handling, and overall optimization of cloud services/systems and of underlying virtualized data centers.
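
As a small illustration of the kind of sub-model such a hierarchy composes (not the project's multi-level models themselves), a two-state failure/repair Markov chain gives steady-state availability MTTF / (MTTF + MTTR); the rates below are assumed values.

    # Two-state (up/down) Markov availability model with failure rate
    # lambda and repair rate mu; steady-state availability is
    # mu / (lambda + mu) = MTTF / (MTTF + MTTR). Rates are illustrative.
    def steady_state_availability(mttf_hours, mttr_hours):
        lam = 1.0 / mttf_hours      # failure rate
        mu = 1.0 / mttr_hours       # repair rate
        return mu / (lam + mu)

    # A VM with a 2000-hour MTTF and a 0.5-hour automated restart:
    print(f"{steady_state_availability(2000, 0.5):.6f}")   # ~0.999750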

Fethi Belkhouche, University Enterprises, Inc. (on behalf of California State University, Sacramento)
Predictive Maintenance in Power Systems

This project seeks to improve fault tolerance and reliability in smart grid power systems. The approach is based on predictive maintenance, anticipating possible failures before they occur. The project explores new solutions from control theory and artificial intelligence, such as Bayesian prediction and machine learning. Algorithms are integrated with embedded smart sensors. Distributed computation with embedded processors and communication devices is used to predict faults and make online corrections when necessary.
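
As an illustration of the Bayesian-prediction ingredient (with made-up prior and likelihoods, not field data), the failure probability of a component can be updated from successive abnormal sensor readings as follows.

    # Update the probability that a component is heading toward failure
    # given a sensor alarm, using Bayes' rule. Prior and likelihoods are
    # illustrative assumptions.
    def posterior_failure(prior, p_alarm_given_fault, p_alarm_given_healthy):
        evidence = (p_alarm_given_fault * prior
                    + p_alarm_given_healthy * (1.0 - prior))
        return p_alarm_given_fault * prior / evidence

    p = 0.02                       # prior probability of incipient fault
    for _ in range(3):             # three consecutive abnormal readings
        p = posterior_failure(p, p_alarm_given_fault=0.9,
                              p_alarm_given_healthy=0.1)
        print(round(p, 4))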

Farshid Delgosha, New York Institute of Technology
Alternate Public Key Cryptosystems

We propose a novel algebraic approach for the design of multivariate public-key cryptosystems using paraunitary matrices over finite fields. These matrices are a special class of invertible matrices with polynomial entries. Our motivations for using such matrices are the existence of fully parameterized building blocks that generate these matrices and the fact that the inverse of any paraunitary matrix is obtained with no computational cost. Using paraunitary matrices, we design a public-key cryptosystem and a digital signature scheme. The public information in the proposed schemes consists of a number of multivariate polynomials over a finite field. The size of the underlying finite field must be small to ensure the efficiency of arithmetic over the field and consequently of the cryptographic operations. The link between the security of these schemes and the difficulty of a computational problem is established using mathematical evidence. The underlying computational problem, which is considered to be hard, is solving systems of multivariate polynomial equations over finite fields. We have studied instances of the proposed techniques over the finite field GF(256). Our theoretical analyses have shown that the proposed instances are extremely efficient in terms of encryption, decryption, signature generation, and signature verification.

Robin Sommer, International Computer Science Institute
A Concurrency Model for Deep Stateful Network Security Monitoring

Historically, parallelization of intrusion detection/prevention analysis has been confined to fast string-matching and coarse-grained load-balancing, with little fine-grained communication between the analysis units. These approaches buy some initial speed-ups, but for more sophisticated forms of analysis, Amdahl's Law limits the performance gains we can readily attain. We propose to design and implement a more flexible concurrency model for network security monitoring that instead enables automatic and scalable parallelization of complex, highly stateful traffic analysis tasks. The starting point for our work will be a parallelization approach exploiting the observation that such tasks tend to rely on a modest set of programmatic idioms that we can parallelize well with respect to the analysis units they are expressed in. Leveraging this observation, we will build a scheduler that splits the overall analysis into small independent segments and then identifies temporal and spatial relationships among these segments to schedule them across a set of available threads while minimizing inter-thread communication. We will implement such a scheduler as part of a novel high-level abstract machine model that we are currently developing as an execution environment specifically tailored to the domain of high-performance network traffic analysis. This model directly supports the field's common abstractions and idioms, and the integrated scheduler will thus be able to exploit the captured domain knowledge to parallelize analyses much more effectively than standard threading models can. We believe that the work we propose fits well with Cisco's mission to provide flexible high-performance network technology. If successful, our results will not be limited to the security domain but will also enable sophisticated, stateful network processing on modern multi-core platforms in general.
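
The spatial side of such scheduling can be illustrated by its simplest special case, hashing each connection to a worker so per-flow state stays thread-local; the sketch below shows only this baseline and none of the temporal scheduling described above.

    # Hash each connection's 5-tuple to a worker so that all per-flow state
    # stays local to one thread and inter-thread communication is avoided.
    # The proposed scheduler also tracks temporal relationships; this
    # baseline sketch does not.
    import hashlib

    def worker_for_flow(src_ip, src_port, dst_ip, dst_port, proto, n_workers):
        # Sort endpoints so both directions of a connection map to the
        # same worker.
        a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
        key = f"{a}-{b}-{proto}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % n_workers

    print(worker_for_flow("10.0.0.1", 40000, "192.0.2.7", 80, "tcp", 8))
    print(worker_for_flow("192.0.2.7", 80, "10.0.0.1", 40000, "tcp", 8))  # same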

Nicholas Race, Lancaster University
Towards Socially Aware Content Centric Networking

Social networks are having an increasing role in the presentation and personalisation of content viewed across the Internet, with users exposed to a range of content items coupled with information from their social networks. In these scenarios there is currently a limited understanding of the emerging usage patterns and context of use and thus, the potential impact this may have on the delivery platform and underlying network infrastructure.

In this project we propose to examine these emerging usage patterns in detail, focusing on the role of the human network and the implications this has for content centric networking.  In broad terms, we seek to develop advanced models of social interaction through long term measurements to develop new mechanisms to inform, create and sustain intelligent content delivery networks.  Initially, this work will focus on defining the relevant metrics that can be leveraged as 'triggers' for informing content networks.  Through knowledge extracted from social graphs and social interactions between users (i.e. the human network) we aim to develop models which can be used to determine or predict likely interest in content items with the intention of improving overall delivery performance.  This research proposition builds upon our previous and ongoing research in this area which has exploited user generated bookmarks to support pre-fetching within CDNs and examined the role of the social graph in informing media distribution systems.

Naofal Al-Dhahir, University of Texas at Dallas
Advanced Interference Mitigation Algorithms for Receivers with Limited Spatial Dimensions

Broadband transmission in shared bands is severely limited by narrow-band interference (NBI). Examples include NBI from Bluetooth, Zigbee, microwave ovens, and cordless phones onto wireless local area networks (WLAN) and AM or amateur radio frequency interference onto digital subscriber line (DSL) modems. Classical NBI mitigation techniques either assume knowledge of the NBI statistics, power level, or exact frequency support which is typically not available in practice or require multiple receive antennas which increase implementation cost. We propose a novel signal processing approach, based on the recent compressive-sensing theory, to estimate and mitigate NBI without making any assumptions on the NBI signal except its narrow-band nature; i.e. sparsity in the frequency domain. In our proposed approach, a novel filter design is proposed to project the received signal vector, which is corrupted by noise and NBI, onto a subspace orthogonal to the information signal subspace. The resulting signal vector will be related to the unknown (but sparse) NBI vector through an under-determined linear system of equations. Then, the sparsity of the NBI vector in the frequency domain is exploited to estimate it uniquely and efficiently by solving a convex optimization program.

Our approach (1) is computationally efficient, (2) is guaranteed to converge to the globally optimum solution, (3) works with a single receive antenna while being compatible with multiple transmit and/or receive antennas, (4) can mitigate multiple NBI sources simultaneously, (5) is agnostic to the NBI type and signaling format, and (6) makes no prior assumptions about the NBI statistics, power level, or frequency support.
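
For illustration, the recovery step can be prototyped numerically as l1-regularized least squares on an under-determined system, here solved with a basic ISTA iteration; the dimensions, random projection matrix, and solver choice are assumptions of the sketch, not the proposed filter design.

    # Given an under-determined linear system y = A x with a sparse x,
    # estimate x by l1-regularized least squares solved with ISTA, a simple
    # proximal-gradient method. All quantities below are synthetic.
    import numpy as np

    def ista(A, y, lam=0.05, iters=500):
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(0)
    n, m = 128, 48                    # ambient dimension, measurements
    x_true = np.zeros(n)
    x_true[[5, 40, 90]] = [1.0, -0.7, 0.5]          # sparse "NBI" spectrum
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = ista(A, y)
    print(np.argsort(-np.abs(x_hat))[:3])           # indices of strongest tones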

Subhash Suri, University of California, Santa Barbara
Fast Traffic Measurement at all Time Scales

Today's traffic measurement tools for fine-grain time scales are woefully inadequate; yet the importance of fine-grain measurements cannot be overstated. First, it is well-known that data traffic looks very different at different time scales: measurements at 1 second granularity can completely miss the severe burstiness of traffic observable at 10 microsecond intervals. Second, the negative synergy of Gigabit link speeds and small switch buffers has led to the so-called microburst phenomena, which can cause packet drops resulting in large increases in latency. Finally, it is difficult to predict in advance which time scale is interesting: is it 10 usec, 100 usec, or 1 msec? Naively running parallel algorithms for each possible time scale, such as measuring the maximum bandwidth over any period of length T, is prohibitively expensive.

Our goal in this proposed research is two-fold: to develop an efficient and flexible tool for fine-grained traffic measurements at all time scales, and to build an algorithmic foundation for a measurement-based understanding of fine-scale traffic behavior. The existing solutions take the extreme form of full traffic flow information at one end (e.g. NetFlow) and coarser-grain summarization techniques at the other end (e.g. packet counters, heavy-hitters, and flow counts). We hope that our tools will provide an intermediate solution, with a measurement accuracy comparable to NetFlow but at an overhead cost comparable to the coarser-grain methods. Our techniques have the flexibility to answer queries after the fact without the overhead of full traffic flow information. We will investigate software and hardware implementations.
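
As a simple illustration of the multi-time-scale problem (not the proposed data structure), fine-grained byte counters can be folded pairwise so that the peak traffic over any power-of-two, aligned window can be queried after the fact.

    # Record byte counts in fine-grained bins, then fold them pairwise so
    # the peak volume at each power-of-two time scale can be queried later.
    # Only aligned windows are considered, a simplification of the sketch.
    def peak_per_scale(bins, bin_usec):
        scales = {}
        level, width = bins[:], bin_usec
        while level:
            scales[width] = max(level)
            # pairwise sum to build the next coarser time scale
            level = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
            width *= 2
        return scales   # {interval_usec: max bytes seen in any such interval}

    byte_bins = [0, 0, 1500, 9000, 64000, 3000, 0, 1500]    # 10 us bins
    for interval, peak in peak_per_scale(byte_bins, 10).items():
        print(f"{interval:>4} us window: peak {peak} bytes")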

Dmitri Krioukov, The Cooperative Association for Internet Data Analysis - UCSD
Scalable Interdomain Routing in Hyperbolic Spaces

Rapid growth of the Internet causes concerns among Internet experts about scalability and sustainability of existing Internet interdomain routing. Growing FIB sizes are alarming, as we are approaching fundamental physical limits of router hardware, e.g., related to heat dissipation. Worse yet, it has been proven that the communication overhead of any routing protocol cannot grow slower than polynomially with the size of Internet-like topologies. Our previous URP proposal "Convergence-Free Routing over Hidden Metric Spaces as a Form of Routing without Updates and Topology Awareness" (funded in 2007) laid the foundations for an interdomain routing solution with scalability characteristics which are either equal or close to the theoretically best possible. Here we propose to take the next step to determine the applicability of the obtained results to the Internet: develop a new addressing scheme based on the hyperbolic embedding of the Internet AS topology, and evaluate the performance of topology-oblivious routing over this addressing scheme. In this approach, each AS is assigned coordinates in an underlying hyperbolic space. To forward packets, interdomain routers will store the coordinates of neighbor ASes. Given the destination AS coordinate in the packet, they would perform greedy geometric forwarding: select as the next-hop AS the AS neighbor closest to the destination AS in the hyperbolic space. If the proposed strategy is successful, it will deliver an optimal routing solution for interdomain routing, i.e., a solution with the best possible scaling characteristics, thus resolving serious scaling limitations that the Internet faces today.
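
For illustration, greedy geometric forwarding over hyperbolic coordinates can be sketched as follows; the polar coordinates are placeholders rather than an actual embedding of the AS topology, and distances follow the hyperbolic law of cosines.

    # Each AS has polar coordinates (r, theta) in the hyperbolic plane; a
    # router hands the packet to whichever neighbor is closest to the
    # destination. Coordinates below are illustrative only.
    import math

    def hyperbolic_distance(a, b):
        (r1, t1), (r2, t2) = a, b
        dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
        x = (math.cosh(r1) * math.cosh(r2)
             - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
        return math.acosh(max(x, 1.0))

    def greedy_next_hop(neighbors, destination):
        # neighbors: {as_number: (r, theta)}
        return min(neighbors, key=lambda n: hyperbolic_distance(neighbors[n], destination))

    neighbors = {65001: (2.0, 0.3), 65002: (3.5, 1.2), 65003: (1.0, 2.8)}
    dest = (4.0, 1.0)
    print(greedy_next_hop(neighbors, dest))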

Sanjay Rao, Purdue University
Abstracting and simplifying routing designs of operational enterprise networks

Enterprise routing designs are often complex, reflecting the plethora of operator objectives that must be met, such as administrative policies and requirements on stability, isolation, and performance. The objectives are often achieved by partitioning the network into multiple routing domains and using a variety of low-level configuration primitives such as static routes, route redistribution, and tunnels. Use of such low-level primitives leads to significant configuration complexity and is consequently error-prone. The goal of this project is to characterize operational enterprise routing designs with a view to understanding common design patterns and usages. Metrics will be developed to characterize both the configuration complexity of routing designs and operator performance objectives. Such understanding and metrics will guide and inform the creation of alternate routing designs that meet operator correctness and performance objectives with lower configuration complexity. Our efforts will be enabled by a tool that we are developing which can reverse engineer the routing design of an enterprise network from low-level router configuration files. The tool will capture aspects such as the number of routing instances in the network, the protocols and routers in each instance, and the logic that connects instances together.

Dawn Song, University of California, Berkeley 
Sprawl: Real-Time Spam URL Filtering as a Service

With the proliferation of user-generated content and web services such as Facebook, Twitter and Yelp, spammers and attackers have even more channels to expose users to malicious URL links, leading them to spam or malicious websites as they use the web services. Thus it is important to provide URL filtering as a real-time service to protect users from malicious URL links. In this project, we propose to design and develop  a real-time system to filter spam URLs from millions of URLs per day before they appear on web services and users' computers. To accomplish this, we propose to develop a novel web crawler and spam URL classifier that visits URLs submitted by web services and makes an immediate determination of whether the URL should be blocked. By limiting classification to features solely derived from crawling the URL, our system is independent of the context where a URL appears, allowing our system to generalize to any environment and work in conjunction with existing security protections.
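
As a toy illustration of the classification stage (the features, training data, and model choice are assumptions of the sketch, not the proposed system), a few crawl-derived features can be scored with an off-the-shelf logistic-regression model.

    # Derive a few features from a crawled URL and score it with a
    # logistic-regression model. Everything below is a hand-made toy.
    import re
    from sklearn.linear_model import LogisticRegression

    def url_features(url, num_redirects, page_len):
        return [
            len(url),
            url.count("."),
            1 if re.search(r"\d{1,3}(\.\d{1,3}){3}", url) else 0,  # raw IP host
            num_redirects,
            page_len,
        ]

    # Tiny hand-labeled toy set: (url, redirects, page length, is_spam)
    train = [
        ("http://example.com/article", 0, 4200, 0),
        ("http://192.0.2.9/win-prize-now", 3, 300, 1),
        ("http://news.example.org/2011/report", 0, 9800, 0),
        ("http://cheap.pills.example.biz/buy?x=1&y=2", 2, 150, 1),
    ]
    X = [url_features(u, r, l) for u, r, l, _ in train]
    y = [label for *_, label in train]
    model = LogisticRegression().fit(X, y)
    test = url_features("http://198.51.100.4/free$$$", 4, 120)
    print(model.predict_proba([test])[0][1])        # estimated spam probability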

Andreas Savakis, Rochester Institute of Technology
Video Event Recognition using Sparse Representations and Graphical Models

The objective of the proposed research is to design a system that can reliably identify semantic events in video with minimum supervision. We employ a novel framework inspired by the emerging field of Compressive Sensing and probabilistic graphical models. Compressive Sensing (CS), or Sparse Representations (SR), facilitates dealing with large amounts of high dimensional data by representing the signal of interest by a small number of samples. Our approach incorporates the construction of a dictionary containing semantically distinguishable features to enable generalization due to appearance variability in a computationally efficient manner. Using the learned dictionary, we form a histogram of features based on bags of visual words, where each bin corresponds to a specific primitive action dictionary element. When a new video sequence is presented to the system, we identify the corresponding dictionary elements by utilizing the Sparse Representations (SR) framework. Probabilistic graphical models, such as conditional random fields, are used to model correlations between primitive actions and identify complex events. Our approach has great potential for applications ranging from video search and similarity across videos to real time activity and event recognition for surveillance, human computer interaction, and smart environments.
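
The dictionary-coding step can be illustrated with a minimal orthogonal matching pursuit that assigns each descriptor to a few atoms and accumulates a histogram of atom activations; the random dictionary and descriptors below stand in for learned atoms and real video features.

    # Orthogonal matching pursuit picks a few dictionary atoms per
    # descriptor; the chosen atom indices are accumulated into a histogram
    # ("bag of visual words"). Synthetic data stands in for real features.
    import numpy as np

    def omp(D, x, n_nonzero=3):
        residual, support = x.copy(), []
        for _ in range(n_nonzero):
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        return support

    rng = np.random.default_rng(1)
    D = rng.standard_normal((64, 200))                 # 200 atoms, 64-dim features
    D /= np.linalg.norm(D, axis=0)
    descriptors = rng.standard_normal((500, 64))       # descriptors from one clip
    histogram = np.zeros(D.shape[1])
    for desc in descriptors:
        for atom in omp(D, desc):
            histogram[atom] += 1
    print(histogram[:10])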

Steven Low, Caltech
Architectures, Algorithms and Models for Smart Connected Societies

The power network will undergo in the next few decades the same architectural transformation that the telephone network has recently gone through to become more intelligent, more open, more diverse, more autonomous, and with much greater user participation. This architectural transformation will create an Internet equivalent of the power network, with equally far-reaching and lasting impacts. Our goal is to develop some of the theories and algorithms that will help understand and guide this transformation, leveraging what we have learned from the Internet and adapting it to the unique characteristics of power networks.

Grenville Armitage, Swinburne University of Technology
Detecting and reducing BGP update noise

In previous work we implemented and evaluated an approach called Path Exploration (PE) Damping (PED), a router-level mechanism for reducing the volume of propagation of likely transient update messages within a BGP network while decreasing the average time to restore reachability compared to current BGP update damping practices. We are now interested in more complex BGP update sequences that are still essentially "noise": sequences that force update processing at one or more downstream peers, yet after the sequence ends there is no meaningful alteration to the BGP topology. We propose to extend our previous work in three ways: developing and evaluating techniques for auto-tuning of path exploration damping intervals, detecting and suppressing more complex "noisy" BGP update sequences, and improving our "Quagga Accelerator" toolkit.
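
The notion of a "noise" sequence can be illustrated as follows: a burst of updates for a prefix counts as noise if, once it settles, the selected best path is unchanged. The toy shortest-AS-path selector below illustrates the definition only; it is not the proposed damping mechanism.

    def settle(initial_paths, updates):
        """initial_paths: {peer: as_path}; updates: [(peer, as_path or None)]"""
        paths = dict(initial_paths)
        for peer, path in updates:
            if path is None:
                paths.pop(peer, None)      # withdrawal
            else:
                paths[peer] = path
        return paths

    def best(paths):
        # toy decision process: prefer the shortest surviving AS path
        return min(paths.values(), key=len) if paths else None

    initial = {"peerA": (65001, 65020), "peerB": (65002, 65011, 65020)}
    burst = [("peerA", None),                       # withdraw
             ("peerA", (65001, 65013, 65020)),      # explore a longer path
             ("peerA", (65001, 65020))]             # original path returns
    noise = best(settle(initial, burst)) == best(initial)
    print(noise)   # True: downstream peers processed three updates for nothing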

Hussein T. Mouftah, University of Ottawa
An Efficient Aggregate Multiparty Signature Scheme for S-BGP AS-PATH Validation

This research study will involve the evaluation of secure multiparty digital signature schemes. Our objective is to design/select a lightweight and secure scheme that enables a receiver to verify the authenticity of an update signed sequentially by multiple ASes, as well as the order in which they signed the update. We will evaluate the performance of the schemes in the context of S-BGP, based on the assumption that some major nodes in the network are considered trustworthy by the rest of the nodes in the network.

Rama Chellappa, University of Maryland
Methods for Video Summarization and Characterization

The tremendous increase in video content is driven by inexpensive video cameras and the growth of the Internet, including the popularity of several social networking and video-sharing websites. This explosive growth in video content has made the development of efficient indexing and retrieval algorithms for video data essential. Most existing techniques search for a specific video based on its textual tags, which are often subjective, ambiguous, and do not scale with the increasing size of the dataset. We propose to develop novel methods for two problems that will enable efficient video indexing and retrieval techniques.

  1. Creating abstracts of video
  2. Extracting meta-data for videos

Abstracts of videos are created using an optimization technique that ensures complete coverage of the video and guarantees that the retained summaries are diverse. The method for extracting meta-data depends on video classification using chaotic invariants.
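
For illustration only (with random features standing in for real frame descriptors, and a greedy farthest-point rule standing in for the proposed optimization), a coverage-and-diversity summary can be sketched as follows.

    # Greedily pick frames whose features are far (in Euclidean distance)
    # from everything selected so far, which spreads the summary across the
    # video's content. Synthetic features replace real descriptors.
    import numpy as np

    def greedy_diverse_summary(frame_features, k):
        chosen = [0]                                   # seed with the first frame
        for _ in range(k - 1):
            dists = np.min(
                np.linalg.norm(frame_features[:, None, :] -
                               frame_features[None, chosen, :], axis=2),
                axis=1)
            dists[chosen] = -1.0                       # never re-pick a frame
            chosen.append(int(np.argmax(dists)))       # farthest-point selection
        return sorted(chosen)

    rng = np.random.default_rng(2)
    features = rng.standard_normal((300, 32))          # 300 frames, 32-dim features
    print(greedy_diverse_summary(features, k=6))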

Paolo Dini, Centre Tecnologic de Telecomunicacions de Catalunya (CTTC)
Improving User Experienced Quality using Multiple Wireless Interfaces Simultaneously and Network Coding

The improvement in the per-user capacity of next-generation wireless access technologies is continuously trailing the ever-increasing bandwidth requirements of mobile IP services. For this reason, complementary solutions are being investigated by the research community to fulfill the same requirements using existing wireless technologies. In this respect, a promising solution is to use multiple access technologies simultaneously in order to obtain an aggregated bandwidth that is sufficient to provide a satisfactory Quality of Experience (QoE) to the user.

State-of-the-art solutions for the simultaneous use of multiple access links suffer from the delay dispersion problem, due to the heavy-tailed characteristic of delay in today's Internet. In order for this dispersion not to result in packet losses due to late arrivals at the re-ordering buffer, a large reordering delay is needed. To summarize, existing solutions are effective in bonding multiple access technologies together to increase throughput, but in doing so they necessarily increase the end-to-end application-layer delay, which is unacceptable for many applications.

The use of Network Coding (NC) makes it possible to trade off throughput and delay effectively, and to cope with occasional packet losses. In fact, it adds redundancy packets: if some packets of the original flow arrive late at the re-ordering entity due to heavy-tailed delays, they can be recovered by combining the data and redundancy packets that have arrived in time.
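
The simplest possible instance of this idea is a single XOR parity packet per block of k data packets, which lets the receiver rebuild any one packet that is lost or late; the proposal targets more general and adaptive codes, so the sketch below is only an illustration.

    # One XOR parity packet per block of k equal-length data packets lets
    # the receiver rebuild any single missing packet.
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(packets):
        return reduce(xor_bytes, packets)

    def recover(received, parity, missing_index):
        survivors = [p for i, p in enumerate(received) if i != missing_index]
        return reduce(xor_bytes, survivors, parity)

    block = [b"pkt0-data", b"pkt1-data", b"pkt2-data", b"pkt3-data"]
    parity = make_parity(block)
    # Packet 2 is late at the re-ordering buffer; rebuild it from the rest.
    rebuilt = recover(block, parity, missing_index=2)
    print(rebuilt == block[2])    # True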

In applying the NC paradigm to the scenario that we consider in this proposal, we have to consider the following problems:

  1. which coding strategy to choose for a given set of access link characteristics (delay, capacity) and service requirements.
  2. how to dynamically adapt the coding strategy to the varying characteristics of wireless links.
  3. how to implement the solution transparently to the application.

In addition, a specific goal of our research proposal is to test our NC-based solution experimentally in a proof-of-concept testbed using both WLAN and 3G access technologies. The EXTREME testbed, used in grant 2008-07109, will be extended to demonstrate the proposed solution. In particular, the encoding and decoding operations will be performed by software modules to be installed in the mobile node and in the bonding server placed between the mobile and its corresponding node.

Ronald (Shawn) Blanton, Carnegie Mellon University
Identifying Manufacturing Tests for Preventing Escape to the Board Level

Our objective is to develop a methodology for analyzing system-level test data to derive improved manufacturing tests that prevent IC-level escapes to the system/board level of integration.

Mike Wirthlin, Brigham Young University
Timing Aware Partial TMR and Formal Analysis of TMR

This work will investigate methods for applying partial TMR within FPGA designs in a way that considers the timing of the FPGA design. Conventional methods for applying TMR negatively affect timing and may not meet timing constraints. New partial TMR selection and application techniques will be investigated and developed. In addition, this project will investigate the ability to verify partial TMR netlists by using commercially available formal verification techniques.

Yun Liu, Beijing Jiaotong University
Research on Consensus Formation on the Internet

The Internet has introduced many new features into the process of consensus formation, such as the huge number of participants, the large volume of sample data, the speed of diffusion, and its decentralized structure. Web users exhibit various complex traits, such as extremism, obstinacy, bounded rationality, etc. These features make it difficult to apply traditional methods and models to the study of consensus formation on the Internet. The research concentrates on how unique features of the Internet influence the process of consensus formation. It adopts methods from interdisciplinary fields, studying the diffusion of Internet information, the mapping between individual preferences and opinion choices, the interaction of individual preferences and their action processes, and the influence of extreme agents and hub agents on consensus formation on the Internet.
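
As a conventional baseline against which such Internet-specific features can be compared (not the models to be developed here), a bounded-confidence opinion dynamics simulation looks as follows.

    # Bounded-confidence (Deffuant-style) update: two randomly paired agents
    # move toward each other only if their opinions already differ by less
    # than a confidence threshold. Hubs, extremists, and diffusion structure
    # are deliberately left out of this baseline.
    import random

    def bounded_confidence(n_agents=200, threshold=0.25, rate=0.5,
                           steps=20000, seed=0):
        rng = random.Random(seed)
        opinions = [rng.random() for _ in range(n_agents)]
        for _ in range(steps):
            i, j = rng.randrange(n_agents), rng.randrange(n_agents)
            if i != j and abs(opinions[i] - opinions[j]) < threshold:
                shift = rate * (opinions[j] - opinions[i])
                opinions[i] += shift
                opinions[j] -= shift
        return opinions

    ops = bounded_confidence()
    print(f"opinions span {min(ops):.2f} to {max(ops):.2f}")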

Xiaohong Guan, Xi'an Jiaotong University
Abnormal Data Detection and Control Actuation Authentication for Smart Grid

Smart grid technology provides a desirable infrastructure for energy-efficient consumption and transmission by integrating traditional electrical power grids with an information network, and may incorporate a large number of renewable resources, new storage, and dynamic user demands. Since state measurements, pricing information, and control actions are transmitted via the information network, abnormal sensing data and false control commands, arising from the hacking, attacks, and fraudulent information commonly encountered on the Internet, would pose a great threat to secure and stable power grid operation. A new method is proposed to detect abnormal sensing data in a smart grid based on fusing data consistency checks with the detection of abnormal network traffic. A new dynamic authentication mechanism is proposed for control commands issued from the control center to an RTU (remote terminal unit) by taking into account the structural characteristics of the power grid. A new model is proposed to assure the run-time trustworthiness of RTUs and detect abnormal RTUs without applying an embedded trusted platform module.
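
The data-consistency ingredient can be illustrated with the classical residual-based bad-data check on a linear measurement model; the model, noise levels, and threshold below are assumptions of the sketch, and the proposed method additionally fuses such checks with network-traffic anomaly detection.

    # Fit the linear measurement model z = H x + e by least squares and
    # flag measurements whose residuals are improbably large.
    import numpy as np

    def flag_bad_measurements(H, z, threshold=3.0):
        x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
        residuals = z - H @ x_hat
        sigma = np.std(residuals) + 1e-12
        return np.where(np.abs(residuals) > threshold * sigma)[0]

    rng = np.random.default_rng(3)
    H = rng.standard_normal((30, 4))            # 30 measurements, 4 states
    x_true = np.array([1.0, -2.0, 0.5, 3.0])
    z = H @ x_true + 0.01 * rng.standard_normal(30)
    z[17] += 5.0                                # one falsified measurement
    print(flag_bad_measurements(H, z))          # expected: [17]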

Qiang Xu, The Chinese University of Hong Kong
Physical Defect Test for NTF with Minimized Test Efforts – Phase II, Discover NTF Devices

No Trouble Found (NTF) devices refer to those devices that fail in system-level test but refuse to fail when retested on automatic test equipment (ATE) by the IC providers. To solve the NTF problem, we first need to determine which chips on a board that fails system test are malfunctioning. Unlike prior solutions that try to build a direct relationship between the test syndromes and the failed ICs (with unsatisfactory diagnosis accuracy), in this project we plan to add a "bridge" linking these two items to achieve a more fine-grained diagnosis solution, considering the functional behavior of each IC, the system structure, and the boards' repair history. The research outputs of this project will include not only publications in prestigious journals and conferences, but also a novel diagnosis tool that ranks potential NTF devices and suggests new tests for enhanced diagnosis accuracy.

Steven Swanson, University of California, San Diego
Characterizing Solid-State Storage Devices for Enterprise Applications

Flash-based storage is poised to revolutionize how computers store and access non-volatile data. Raw flash chips and solid-state drives (SSDs) offer the promise of orders-of-magnitude reductions in latency and commensurate gains in bandwidth. Many segments of the computing industry are counting on SSDs to enable exciting applications ranging from intelligent mobile devices to high-end networking equipment to "cloud" storage systems to mission-critical high-performance storage applications.

We propose to evaluate discrete flash devices and flash-based solid-state drives to understand their performance, efficiency, and reliability characteristics. The data that these studies provide will place systems designers on a firm footing with respect to the behavior of the flash-based components they wish to include in products. As a result, designers will be able to employ these devices in situations that minimize their shortcomings and allow their strengths to shine through. The result will be systems that are faster and more efficient, but that can also meet the strenuous reliability requirements of the most demanding applications.
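
One basic characterization primitive can be sketched as follows: timing random 4 KB reads and reporting the latency distribution. The file path is a placeholder, and a real measurement would bypass the page cache (e.g. via O_DIRECT or a raw device), which this simplified sketch does not.

    # Time random 4 KB reads against a file and report the latency
    # distribution. Simplified: no cache bypass, placeholder path.
    import os, random, statistics, time

    def random_read_latencies(path, block=4096, samples=1000, seed=0):
        rng = random.Random(seed)
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        latencies = []
        try:
            for _ in range(samples):
                offset = rng.randrange(0, max(size - block, 1))
                start = time.perf_counter()
                os.pread(fd, block, offset)
                latencies.append((time.perf_counter() - start) * 1e6)  # microseconds
        finally:
            os.close(fd)
        return latencies

    lat = random_read_latencies("/tmp/testfile.bin")   # placeholder target
    print(f"median {statistics.median(lat):.1f} us, "
          f"p99 {sorted(lat)[int(0.99 * len(lat))]:.1f} us")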

Saleem Bhatti, University of St Andrews / Computer Science
IP Scalability Through Naming (IP-STN)

Growing concerns about the scalability of addressing and routing have prompted the research community to consider the long-discussed idea of the Identifier / Locator (I/L) split as a solution space. As documented by the IRTF Routing Research Group (RRG), some proposals for an I/L approach to networking are focussed towards "in-network" implementation models, such as LISP, while some are focussed on end-system implementation models, such as the Identifier Locator Network Protocol (ILNP). As the complexity of the current in-network engineering landscape grows, introducing further complexity within the network may not be scalable in the long term, and may destabilise existing, mature technology deployments (such as MPLS). This proposal is to build c-ILNPv6, a prototype implementation of the core functionality of ILNP based on IPv6. ILNPv6 has almost no impact on existing core network engineering, as it is an end-system implementation re-using the existing IPv6 packet format and address space. Use of ILNPv6 would allow multi-homing to be implemented without any additional routing table entries in the DFZ. While moving complexity away from the core, ILNPv6 also offers the opportunity for new developments in adjacent areas such as mobility.
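
For illustration of the identifier/locator reinterpretation of the IPv6 address (high-order 64 bits acting as a Locator, low-order 64 bits as a Node Identifier), the sketch below splits an address and rewrites only the locator on a move; the addresses are documentation-prefix placeholders.

    # Reinterpret a 128-bit IPv6 address as a 64-bit Locator (where the
    # node is attached) plus a 64-bit Node Identifier (who the node is).
    import ipaddress

    def split_ilnpv6(addr):
        value = int(ipaddress.IPv6Address(addr))
        locator = value >> 64
        node_id = value & ((1 << 64) - 1)
        return locator, node_id

    def rewrite_locator(addr, new_locator_prefix):
        _, node_id = split_ilnpv6(addr)
        new_locator, _ = split_ilnpv6(new_locator_prefix)
        return str(ipaddress.IPv6Address((new_locator << 64) | node_id))

    addr = "2001:db8:aaaa:1:0123:4567:89ab:cdef"
    # The node moves to a new attachment point: only the locator changes,
    # so transport-layer state keyed on the identifier survives the move.
    print(rewrite_locator(addr, "2001:db8:bbbb:2::"))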

Kamesh Munagala, Duke University
Richer Models and Mechanisms for Internet Auctions

Auction theory has a rich and growing body of work both in economics, and more recently in computer science. The rise of modern applications stemming from the internet (mainly through search engines, social networks, and auction sites) has been the main driving force behind the development of richer models and mechanisms for auctions. Though internet auctions have largely derived their theory and implementations from traditional auctions, there are several features, particularly externalities, budgets, and presence of side-information, that distinguish these auctions, and which call for a paradigm shift in their design. The PI seeks to develop new models and algorithms to study these allocation problems.

Florian Metze, Carnegie Mellon University
Ad-Hoc Microphone Arrays

Recognition of speech recorded through distant microphones (i.e. table-top microphones in video conferences, hands-free units for mobile devices) continues to be one of the most challenging tasks in speech processing, with word error rates around 30% even in otherwise ideal conditions. This is mainly due to a mismatch of acoustic models, which were trained on single-channel, close-talking data, conditions not encountered during testing. Single-channel signal enhancement improves the perceptual quality of speech, but does not improve recognition. Microphone arrays can be used in some situations, but they require careful placement and speaker localization and tracking to function well, which is often impractical. We propose a training scheme for acoustic models using multiple arbitrarily placed microphones (an "ad-hoc microphone array") in parallel, instead of just one. By training and adapting the acoustic model on multiple slightly different inputs, instead of just one, it will generalize well to the microphone conditions encountered during testing. Recent advances in discriminative training of acoustic models allow joint optimization of speaker-dependent acoustic models in multiple discriminatively trained feature spaces, derived from multiple microphones. This allows training acoustic models independently of spatial arrangements in the room. Preliminary results using non-discriminative transformations confirm the effectiveness of this approach, and suggest discriminatively trained models should be sufficient for indexing of end-user videos recorded with hand-held devices, transcription, summarization, or translation of content in tele-presence systems, or other applications.

Richard Stern, Carnegie Mellon University
Robust Speech Recognition in Reverberant Environments

Despite decades of promise, the use of speech recognition, speaker identification, and related speech technologies in practical applications remains extremely limited. This disappointing outcome is a consequence of a lack of robustness in the technology: the performance of speech-based systems in natural environments, and especially those in which the microphones are distant from the speaker, is far worse than the performance that is attainable with a head-mounted microphone. Any type of distant-talking speech recognition system, as would be used in systems that incorporate voice input/output in applications such as videoconferencing or the transcription, diarization, and analysis of meetings, must cope with the effects of both additive noise and the reverberation that is present in natural indoor acoustical environments. While substantial progress has been made in the treatment of additive noise over the past two decades, the problem of reverberation has been quite intractable until recently, and the accuracy of speech recognition systems and related technologies degrades sharply when these systems are deployed in natural rooms.

The goal of the proposed research is to develop paradigm-shifting signal processing and analysis techniques that would provide sharply improved speech recognition accuracy in reverberant environments. Toward this end we will extend and combine two very complementary technologies that have shown great promise (and some success) in ameliorating the difficult problem of machine speech perception in reverberation. The first approach is motivated by the well-documented ability of the human auditory system to separate sound sources and understand speech in highly reverberant environments. Toward this end we have developed feature extraction procedures that exploit the relevant processing by the auditory system to provide speech representations that maintain greater invariance in the presence of signal distortion than is observed with conventional feature extraction. The second approach is based on blind statistical estimation of the signal distortion introduced by the filtering effects of room acoustics. Solutions are developed at multiple levels of processing that are based on parameter estimation, and that exploit constraints imposed by all reverberant environments and by the nature of speech itself.

The proposed work consists of a combination of fundamental research in robust signal processing for reverberant environments and application development on tasks of relevance to Cisco.  The fundamental research will include further development of the auditory-based approaches, and research in the combination of approaches based on auditory modeling and statistical modeling.

We are also committed to developing our systems in a context that is relevant to the mission of the Speech and Language Technologies (SaLT) section of the Emerging Technologies Group. We intend to develop and benchmark our algorithms using data that are relevant to the group, to the extent possible. In all cases we will transfer working technology to the SaLT group, including computationally-efficient realizations that can be run in online fashion for real-time applications as well as more complex implementations that will provide superior performance for non-real-time speech analysis, with complementary solutions developed for single microphones, pairs of microphones, and microphone arrays. The success of our efforts will be measured by the extent to which our approaches provide benefit compared to the conventional MFCC/PLP/AFE front ends in current use by the industry.

Prashant Shenoy, University of Massachusetts Amherst
Architectures, Algorithms and Models for Smart Connected Societies

Today's smart electric grids provide support for various technologies, including net metering, demand response, distributed generation, and microgrids. As these meters become more sophisticated, they are able to measure household power consumption at ever finer time-scales. Such fine-scale measurements have serious privacy implications, since they inadvertently leak detailed information about household activities. Specifically, we show that even without any prior training, it is possible to perform simple analytics on the data, using patterns of power events to extract important information about human activities in a household. To mitigate these problems, we will focus on designing and implementing a privacy-enhancing smart meter architecture that enables electric utilities to achieve their net metering goals while respecting the privacy of their consumers. Our approach is based on the notion of zero-knowledge proofs and provides cryptographic guarantees for the integrity, authenticity, and correctness of payments, while allowing variable pricing, demand response, etc., all without revealing the detailed power measurements gathered by smart meters. We will implement our approach on a testbed of 50 prototype TED/Linux sensors that mimic smart meters and experimentally demonstrate the efficacy of our approach.
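
The project's zero-knowledge protocol is more involved, but the core homomorphic check that lets a utility verify a bill without seeing individual readings can be illustrated with a toy Pedersen-style commitment scheme. The Python sketch below uses deliberately tiny, insecure parameters and made-up readings; it is not the proposed construction, only an illustration of the verification pattern.

    import secrets

    # Toy Pedersen commitments in the order-q subgroup of Z_p^*, with p = 2q + 1 a safe prime.
    # These parameters are far too small to be secure; the point is only the homomorphic
    # check: the product of per-interval commitments opens to the billing-period total.
    p, q = 1019, 509
    g, h = 4, 9              # two subgroup generators; in practice log_g(h) must be unknown

    def commit(value: int, blinding: int) -> int:
        return (pow(g, value % q, p) * pow(h, blinding % q, p)) % p

    readings = [3, 1, 4, 1, 5]                       # hypothetical per-interval kWh readings
    blindings = [secrets.randbelow(q) for _ in readings]
    commitments = [commit(m, r) for m, r in zip(readings, blindings)]

    # The meter publishes only the commitments, plus one opening of the aggregate.
    total, total_blinding = sum(readings), sum(blindings) % q

    product = 1
    for c in commitments:
        product = (product * c) % p

    assert product == commit(total, total_blinding)  # utility verifies the billed total
    print("aggregate verified:", total, "kWh")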

Balaji Prabhakar, Board of Trustees of Leland Stanford Junior University
Stanford Experimental Data Center Lab

The Stanford Experimental Data Center Lab has a broad research agenda, spanning networking, high-speed storage, and energy efficiency. In networking, a central project is to design and build a prototype of a large (hundreds of ports), low-latency, high-bandwidth interconnection fabric. Other major projects include rapid and reliable data delivery, and software-defined networking. Storage is viewed as residing largely, if not entirely, in the DRAM of servers; this is the focus of the RAMCloud project. The overall goal is to combine the RAMCloud project and the interconnection fabric project so that applications can access data with a latency of 2 to 3 microseconds or less.

Patrick E. Mantey, University of California, Santa Cruz
Smart Meters and Data Networks can Improve Operation and Reliability of Electric Power Distribution

Smart meters are now being widely deployed, with their primary use being to replace the human reading of meters at consumer sites. In the future, these smart meters will also provide usable consumption data to consumers to help them manage their electricity use when prices vary depending on the time of day (TOU: time-of-use metering) and on conditions in the power grid (CPP: critical peak pricing). The infrastructure for smart meters results in an (IP mesh) data network that overlays the power distribution subsystem.

The focus of this work is to examine how these smart meters, and their data network, augmented by making other components of the power distribution subsystem "intelligent" (i.e. gathering and communicating data), can improve the quality and reliability of power distribution. Smart meters today measure in real time the variables associated with the energy delivered to each consumer (e.g. voltage level, harmonics, real power, reactive power, total energy). The smart meter mesh network delivers selected data from the smart meters to the utility. Today that data is used primarily to replace meter reading, bringing the accumulated energy consumption to the utility for billing. But using this network and more fully exploiting the measurements available from smart meters, combined with data from other distribution system components (e.g. transformers, relays) once they are connected to this data network, and with data now available at the distribution substations, could help discover proactively when components are failing and improve the response to service interruptions.

From the proposed research, we expect to identify the data and the processing needed to make this (more fully "intelligent") distribution grid more reliable.  This will involve using a combination of analysis and modeling / simulation, and should lead to an economic rationale for adding additional "smart" components to power distribution subsystems.    Our campus electrical distribution system will be part of our laboratory for this work, providing real data and a real-world context for the problems addressed via analysis and simulation in this research.

The proposed work will strengthen the efforts of the Baskin School of Engineering at UC Santa Cruz to educate engineers for the "smart grid," with expertise in both power systems and data networks.

Costin Raiciu, Universitatea Politehnica Bucuresti
Evolving Networks with Multipath TCP

Multipath TCP is an idea whose time has come. The IETF is standardizing changes to the TCP protocol such that it can use multiple paths for each connection. The benefits are increased robustness, better network efficiency, and the avoidance of congested paths, providing a better end-user experience. MP-TCP is applicable not only to the Internet, but also to data centers, where it can seamlessly utilize the many alternative paths available between nodes, increasing utilization and fairness.

Our previous research has focussed on making the MP-TCP protocol deployable, and on congestion control to make it efficient. This congestion control has now been adopted by SCTP.  In this proposal we take the next step, asking how networks might evolve to better support MP-TCP.
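
As background for readers unfamiliar with coupled congestion control, the Python sketch below shows the coupled "linked increases" rule documented in RFC 6356, simplified to unit-size segments; the window and RTT values are illustrative only, not part of the proposal.

    def lia_alpha(cwnds, rtts):
        """Aggressiveness factor from RFC 6356 (windows in packets, RTTs in seconds)."""
        total = sum(cwnds)
        best = max(c / (r * r) for c, r in zip(cwnds, rtts))
        denom = sum(c / r for c, r in zip(cwnds, rtts)) ** 2
        return total * best / denom

    def lia_increase(i, cwnds, rtts):
        """Per-ACK congestion-avoidance increase for subflow i under coupled control."""
        alpha = lia_alpha(cwnds, rtts)
        return min(alpha / sum(cwnds), 1.0 / cwnds[i])

    # two subflows: a short-RTT path and a long-RTT path
    cwnds, rtts = [10.0, 20.0], [0.02, 0.10]
    print([round(lia_increase(i, cwnds, rtts), 4) for i in range(2)])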

Our focus is on deployable changes to the network that would enhance the overall performance. We will explore changes to multipath routing in our quest to explore the interactions between Internet traffic engineering and MP-TCP. We will also explore changes to existing data center topologies and routing protocols that play to MP-TCP's advantages.

This proposal is linked to and has the same content as the proposal from Prof. Mark Handley, UCL.

Mark Handley, University College London
Evolving Networks with Multipath TCP

Multipath TCP is an idea whose time has come. The IETF is standardizing changes to the TCP protocol such that it can use multiple paths for each connection. The benefits are increased robustness, better network efficiency, and the avoidance of congested paths, providing a better end-user experience. MP-TCP is applicable not only to the Internet, but also to data centers, where it can seamlessly utilize the many alternative paths available between nodes, increasing utilization and fairness.

Our previous research has focussed on making the MP-TCP protocol deployable, and on congestion control to make it efficient. This congestion control has now been adopted by SCTP. In this proposal we take the next step, asking how networks might evolve to better support MP-TCP.

Our focus is on deployable changes to the network that would enhance the overall performance. We will explore changes to multipath routing in our quest to explore the interactions between Internet traffic engineering and MP-TCP. We will also explore changes to existing data center topologies and routing protocols that play to MP-TCP's advantages.

This proposal is linked to and has the same content as the proposal from Dr Costin Raiciu, PUB.

Albert Cabellos, Technical University of Catalonia (UPC) - Barcelona Tech
How Can LISP Improve Inter-domain Live Streaming?

In the current Internet, live video streaming services typically operate at the application layer (e.g., P2P live streaming) and not at the network layer. The main reason is that multicast is not widely available at core routers because of its high deployment and operational costs. In this project we aim to develop a new multicast protocol that takes advantage of LISP, thereby avoiding deployment at these core routers.

Mohammad Tehranipoor, University of Connecticut
Analysis and Measurement of Aging Effects on Circuit Performance in Nanometer Technology Designs

Current static and statistical static timing analysis (STA and SSTA) tools take into account the process and parasitic parameters to calculate circuit timing and perform timing closure. However, the performance of a chip in the field depends on many other parameters, namely workload, temperature, voltage noise, and lifetime duration. It is vitally important to take these into account during pre-tapeout timing analysis. Among all the parameters, workload presents a major issue, since in many cases the workload is unknown to the designers and test engineers during chip design and fabrication. In this project, we will develop methodologies for performance optimization with and without workload information to ensure that circuits meet their timing budget in the field. In addition, we will generate representative critical reliability (RCR) paths to be monitored in the field. These paths represent the timing of the circuit and age at the same rate as the most critical reliability paths in the circuit. Our previously developed sensors will be utilized to measure the RCR paths in the field accurately. This project also helps perform better performance verification and Fmax analysis.

Ljudevit Bauer, Carnegie Mellon University
Usable and Secure Home Storage

Digital content is becoming common in the home, as new content is created in digital form and people digitize existing content (e.g., photographs and personal records).  The transition to digital homes is exciting, but brings many challenges.  Perhaps the biggest challenge is dealing with access control.  Users want to selectively share their content and access it easily from any of their devices, including shared devices (e.g., the family DVR).  Unfortunately, studies repeatedly show that computer users have trouble specifying access-control policies.  Worse, we are now injecting the need to do so into an environment with users who are much less technically experienced and notoriously impatient with complex interfaces.  We propose to explore an architecture, mechanisms, and interfaces for making access control usable by laypeople faced with increasing reliance on data created, stored, and accessed via home and personal consumer electronics.  The technologies that we plan to study and build on include logic-based access control, expandable grids interfaces, semantic policy specification, and reactive policy creation.

Aditya Akella, The University of Wisconsin–Madison 
Measurement and Analysis of Data Center Network Architectures

Data centers are playing a crucial role in a variety of Internet transactions. To better support current and future usage scenarios, many research efforts have proposed re-architecting key aspects of data centers, including servers, topology, switching, routing, and transport. Clearly, the performance of these proposals depends on properties of data center application workloads and their interaction with various aspects of the data center infrastructure (e.g., physical host capabilities, nature of virtualization support, multipath, buffering, etc.). Unfortunately, most existing designs are evaluated using toy application models that largely ignore the impact of the above factors on the proposals' effectiveness.

The goal of our work is to create representative large-scale application traffic models for data centers based on measurements of data centers in the wild, and to employ the models to accurately compare the trade-offs of existing design proposals. To date, our group has collected and analyzed packet traces, SNMP logs, and topology data from 5 private data centers and 5 cloud data centers to examine a variety of DC network traffic characteristics, such as applications deployed, switch-level packet transmission behavior, flow-level properties, link utilizations, and the persistence and prevalence of hotspots. The proposed research will expand this work in three broad ways: (1) we will expand our empirical observations through additional fine-grained data collection at hosts, switches and links in data centers, as well as additional empirical results on transport and application properties; (2) we will create rich application models capturing dependencies across tasks and identifying key invariant properties of application network and I/O patterns; and, (3) we will use the measurements/models to systematically evaluate DC design proposals while holistically considering host, switch, application and DC-wide properties.

Leonard J. Cimini, University of Delaware
Optimizing Resources for Multiuser-MIMO in IEEE 802.11ac Networks Using Beamforming

The proposed work is a continuation of a successful project in understanding fundamental issues in MIMO beamforming (BF) for IEEE 802.11 networks. The enhanced capabilities promised by 802.11ac come with many challenges, in particular the task of effectively managing system resources. Fundamental questions include: How to most effectively use the available degrees of freedom (DoF)? Which users to serve, taking into account fairness and aggregate throughput? And, how many streams to provide to each user? In the coming year, we propose to continue our research on MU-MIMO BF. Important tasks that we would like to tackle include: develop non-iterative algorithms that achieve a balance between interference and desired signal levels for any number of users and available DoF; determine guidelines for when to use MU-MIMO rather than provide multiple streams to only one user, what information is required at the AP to make this decision, and how to perform mode switching; quantify the sensitivity of MU-MIMO to imperfect channel state information (CSI) and develop robust BF algorithms; and evaluate a variety of scheduling/signaling strategies in MU-MIMO environments.
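
As a point of reference for the beamforming questions above, the numpy sketch below implements a standard zero-forcing MU-MIMO precoder under perfect CSI, one common baseline; it is not the non-iterative algorithms the project will develop, and the antenna and user counts are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n_tx, n_users = 4, 3        # AP antennas, single-antenna users (illustrative sizes)

    # flat-fading downlink channel: row u is user u's channel to the AP antennas
    H = (rng.standard_normal((n_users, n_tx))
         + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

    # zero-forcing precoder: right pseudo-inverse of H, columns scaled to unit power
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    W /= np.linalg.norm(W, axis=0, keepdims=True)

    # the effective channel H @ W is diagonal: no inter-user interference remains
    print(np.round(np.abs(H @ W), 3))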

Jacob A. Abraham, The University of Texas at Austin
Mapping Component-Level Models to the System and Application Levels

Complexity is the primary stumbling block to the effective analysis of computer systems. The difficulty of analyzing complex systems, including Systems-on-Chip (SoCs), for verification, test generation, or power consumption is exacerbated by the increasing complexity of integrated circuits. Most commercial tools can only deal with relatively small components in the design. The proposed research is directed towards developing powerful algorithms for mapping component-level models to the system level. We have developed mapping algorithms for conventional instruction-set processors. We will extend the analysis techniques to general systems, including application-specific processors, which have not been the target of research. The research will focus on the following problems.

1. Extending the research in mapping techniques for conventional processors to general systems, including application-specific processors such as packet processors.
2. Developing new techniques for accurate estimation of the fault coverage of input sequences at the system level.
3. Developing new abstraction techniques to reduce the complexity of the mapping process.

Michael Nicolaidis, Institut Polytechnique de Grenoble / TIMA
Techniques for Soft Error (SEU/SET) Detection and Mitigation in ASICs

Today, large ASICs can have on the order of ten million flip-flops and it is challenging to meet overall system reliability targets without taking active steps to manage the effect of Single Event Upsets (SEUs) and Single Event Transients (SETs). Even though the effect of the vast majority of soft error events is benign, a small number of upsets do result in a high-severity system impact. The system-level effects of soft-errors vary with the application. In many systems, if the events are detected and the system recovers quickly without impacting the end-user experience, then they can be tolerated at a relatively high frequency. Thus the main focus of this work is on the efficient detection of flip-flop and logic soft-errors. Many innovative techniques exist for circuit level detection of soft errors. Some of these are dependent on specific clocking methodologies while others are applicable to designs following an ASIC design flow (D-type flip flop design paradigm). This work will explore various approaches to SEU and SET detection while striving to minimize the area, timing and power penalties, including design sensitivity analysis, timing slack analysis, automated parity generation, automated double-sampling insertion, and selective substitution with hardened flip-flops.

Apinun Tunpan, Asian Institute of Technology
DTNC-CAST: A DTN-NC overlay service for many-to-many bulk file dissemination among OLSR-driven mobile ad hoc network partitions for disaster emergency responses

The problem of creating emergency communication networks, especially in situations where a large-scale disaster (e.g. a tsunami) wipes out traditional telecommunication infrastructure, remains a great challenge. In the first few hours after the large-scale disaster, rescuers may deploy Mobile Ad hoc Networks (MANETs) to set up makeshift communication [1]. There are major concerns in deploying such post-disaster MANETs. One, running a MANET in unfavorable post-disaster environments is prone to very disruptive connectivity, resulting in multiple MANET partitions which occasionally merge and split. Two, there is a need to conserve power and to make efficient use of the limited communication bandwidth, which is likely shared among many rescuers and responsible agencies who have to disseminate bulk files in a many-source to many-destination manner. Three, the disaster emergency MANET should be designed to operate in a manner that suits the information flows of the on-going rescue operations.

To deal with disrupted networks, Disruption/Delay Tolerant Networking (DTN) [2] can be deployed. During the course of a bulk file transfer, we expect the network to be highly disrupted because of intermittent connectivity among MANET nodes. Hence, DTN is more suitable in such emergency situations than traditional MANET transfers, where a slight disruption between two peer nodes would cause the whole bulk file transfer to fail. Our past MANET field tests have revealed that voice over IP and voice/video streaming become essentially unusable when MANET connectivity grows more disruptive.

Network Coding (NC) [4] is an important paradigm shift in that it allows intermediate router nodes to algebraically encode (i.e. mix) and decode (i.e. de-mix) data for multiple on-going sessions. NC can potentially be used to boost bandwidth usage efficiency, to increase communication robustness, and to design new ranges of networked applications. The success of NC depends very much on the topology of the network and the choice of network codes.
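
To make the network coding idea concrete, the toy Python sketch below performs random linear network coding over GF(2): a relay emits random combinations of source packets, and a receiver decodes by Gaussian elimination once it holds a full-rank set. It is purely illustrative and not the code design DTNC-CAST would adopt.

    import numpy as np

    rng = np.random.default_rng(0)

    def gf2_eliminate(A, B):
        """Row-reduce the augmented system [A | B] over GF(2); return (A, B, rank of A)."""
        A, B = A.copy(), B.copy()
        rank = 0
        for col in range(A.shape[1]):
            pivots = np.nonzero(A[rank:, col])[0]
            if pivots.size == 0:
                continue
            p = rank + pivots[0]
            A[[rank, p]] = A[[p, rank]]
            B[[rank, p]] = B[[p, rank]]
            for r in range(A.shape[0]):
                if r != rank and A[r, col]:
                    A[r] ^= A[rank]
                    B[r] ^= B[rank]
            rank += 1
            if rank == A.shape[0]:
                break
        return A, B, rank

    # k source packets of 8 bits each (toy sizes; a real system might code bytes over GF(256))
    k = 3
    src = rng.integers(0, 2, size=(k, 8), dtype=np.uint8)

    # a relay emits random GF(2) combinations; the receiver buffers them until it can decode
    coeffs = np.empty((0, k), dtype=np.uint8)
    coded = np.empty((0, 8), dtype=np.uint8)
    while True:
        c = rng.integers(0, 2, size=(1, k), dtype=np.uint8)
        coeffs = np.vstack([coeffs, c])
        coded = np.vstack([coded, (c @ src) % 2])
        A, B, rank = gf2_eliminate(coeffs, coded)
        if rank == k:
            break

    # with full rank, the reduced coefficient matrix is the identity and B holds the sources
    assert np.array_equal(B[:k], src)
    print(f"decoded {k} packets from {coeffs.shape[0]} coded transmissions")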

The main challenges we propose to solve in this research are i) to incorporate both DTN and NC into a real-world testbed for post-disaster emergency MANET communication scenarios, and ii) to utilize the topology information available from the underlying proactive OLSR MANET routing protocol to make DTN routing decisions and to select network codes. We name our proposed system DTNC-CAST. DTNC-CAST is intended for DTN bulk file dissemination from multiple sources to multiple receivers among intermittently connected OLSR MANET partitions. We propose to realize DTNC-CAST using the DTN2 Reference Implementation (DTNRI). Lastly, we plan to field test DTNC-CAST in scenarios corresponding to post-disaster rescue operations.

Jun Bi, Tsinghua University
Study on Identifier-Locator Network Protocol (ILNP)

ILNP (Identifier-Locator Network Protocol) has become one of the most promising approaches to the routing architecture of the future Internet. The IRTF RRG recently recommended in RFC 6115 that ILNP be pursued within the IETF. However, due to the lack of experimentation with ILNP, many technical questions remain that nobody can clearly answer.

The goals of this project are to develop an ILNP prototype and to study ILNP based on it, including: (i) developing an implementation of ILNP on Linux according to the basic ILNP specification; (ii) studying the possible problems caused by mobility, especially in the scenario where both hosts are moving; (iii) researching the ID/Locator mapping system, including improvements to the DNS and DNSSEC systems for more efficient and secure ID/Locator mapping; (iv) working together with Cisco to improve ILNP based on the study results; and (v) working together with Cisco to submit related RFC drafts describing the implementation, the problems encountered, and the solutions explored.

Lynford Goddard, University of Illinois at Urbana-Champaign
The CISCO UCS for Engineering Design: Applications in Electromagnetic and Photonic Simulations

Introduction: The Finite Element Method (FEM) is a general computational technique for numerically solving partial differential equations. FEM is widely used in application simulation in nearly every field of engineering [1-3], as well as in physics, chemistry, geology, environmental sciences, and even economics and finance. The power, versatility, and accuracy of the technique are evidenced by its successful applications to many practical problems in these areas and by the existence of a wide variety of commercial CAD software that employs the technique. FEM has several advantages over other general numerical methods for solving partial differential equations such as the finite difference method, including more accurate discretization of the problem domain, better control of stability and convergence, and a potential for higher overall accuracy. However, in contrast to the finite difference method, parallelization of the FEM algorithm on conventional parallel computing platforms is less straightforward. Algorithms used to implement FEM require quick communication of massive amounts of intermediate results among the distributed cores during computation. Consequently, in computational electromagnetics and photonics, the use of FEM is primarily restricted to small computational domains, i.e. to model structures and devices that are on the order of the electromagnetic wavelength or smaller. A suitable parallel platform would therefore advance the application of FEM to problems involving structures that are significantly larger than a wavelength. Engineers and scientists have much interest in using FEM to simulate such structures, including antennas, microwave systems, airplanes, missiles, fiber optic components, and integrated photonic devices and circuits. These structures are significantly larger than the wavelength and are difficult to simulate using FEM because of the massive memory requirements and the resulting need for an efficient, high-bandwidth communication network. Moreover, to perform device design, several simulation runs with different device geometries are required, necessitating further computation.
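
For readers who have not worked with FEM, the minimal numpy sketch below assembles and solves a 1D Poisson problem with linear elements; it is only a didactic illustration of the assemble-and-solve structure whose 3D electromagnetic counterpart the project would parallelize on UCS, and all problem choices are arbitrary.

    import numpy as np

    # -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0, with piecewise-linear elements
    n_el = 64                          # number of elements
    h = 1.0 / n_el
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    f = lambda x: 1.0                  # constant load; exact solution is x(1 - x)/2

    # assemble the global stiffness matrix and load vector element by element
    K = np.zeros((n_el + 1, n_el + 1))
    b = np.zeros(n_el + 1)
    k_local = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_el):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += k_local
        x_mid = 0.5 * (nodes[e] + nodes[e + 1])
        b[idx] += f(x_mid) * h / 2.0   # one-point quadrature for the element load

    # homogeneous Dirichlet boundary conditions: solve on the interior nodes only
    interior = slice(1, n_el)
    u = np.zeros(n_el + 1)
    u[interior] = np.linalg.solve(K[interior, interior], b[interior])

    exact = nodes * (1.0 - nodes) / 2.0
    print("max nodal error:", np.max(np.abs(u - exact)))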

Statement of Work: The PI and his team propose a three year phased approach to apply the unique architecture of CISCO's Unified Computing System to demonstrate that this computing medium is effective for advanced FEMs. In particular, the team will develop open source software based on FEM for simulation and design of electromagnetic and photonic devices and systems. The software will be optimized to run on CISCO's UCS platform by using algorithms that exploit the unique memory features and shared network capabilities of the UCS platform in an efficient way.

The overall motivation is to perform accurate 3D photonic simulations on large computational domains. To minimize project risk, we will take a top down multi-resolution approach and adapt the project based on sequential learning. In year 1, we will focus on porting Charm++ onto UCS and on simulating simple photonic structures using just 1-2 nodes. The goal will be to publish a conference paper on device and/or computer science aspects of the research. In years 2 and 3, we will apply lessons learned and develop fast accurate simulation of complex structures [4-8]. Also, the code will be designed to run efficiently given available resources. Specifically, load balancing and communication optimization will be implemented.

The proposed research plan offers rich opportunities for teaching and training of graduate and undergraduate researchers in engineering design and device simulation, parallel computing, and the UCS platform. In addition, the project will provide a natural recruitment pipeline of B.S., M.S., and Ph.D. graduates with in depth training on and experience with the UCS platform. Research and teaching will be integrated through the development of a new graduate seminar course: "Modeling of Photonic Devices," which will also serve to introduce the platform to students from other research groups. Research and teaching results will be disseminated in journals and conferences to enhance the current understanding of parallel computing architectures and their application to electromagnetic and photonic device modeling.

George Porter, University of California, San Diego
Incorporating Networked Storage within Highly Efficient Data-intensive Computing

To date, work on Data-Intensive Scalable Computing (DISC) systems has focused primarily on their performance in isolation. Largely ignored have been the performance implications of migrating data to and from external, networked storage, as well as of supporting high performance in multi-tenant environments with background traffic. In this proposal, we argue for taking a more holistic view of DISC computing that includes the I/O necessary to connect these DISC systems with the actual sources and sinks of data, located elsewhere in the enterprise. The motivation for our approach is to take advantage of the ability to optimize I/O and to extract as much parallelism as is available from the networked storage itself and the network paths interconnecting that storage to the DISC system, so that we can improve total system performance. An unfortunate result of not considering the problem in this way might be a system that is highly tuned and fast by itself, but spends the majority of its time loading and storing data to networked storage.

Thomas Huang, The Board of Trustees of the University of Illinois 
A Mobile and Asymmetrical Telepresence System with Active 3D Visual Sensing

A new paradigm of asymmetrical telepresence is proposed to meet the increasing demand for communication and collaboration between stationary and mobile devices. By leveraging computer vision analysis and synthesis techniques, we perform computation-intensive tasks at the resourceful stationary end as much as possible, and communicate only the essential visual information to the mobile ends, keeping that traffic as small as possible. The visually immersive experience at each end point is maximized subject to local infrastructure constraints. The functionalities we shall develop include: person identification, emotion recognition, 3D face rendering (avatars), and super-resolution. A critical issue is accurate 3D human pose estimation. A novel algorithm using multiple views and 3D depth information is proposed, which is expected to give more robust and accurate pose estimation than existing methods. We also expect to integrate the proposed asymmetrical paradigm with Cisco's current TelePresence products, and to build a prototype system with active visual sensing functions enabled by our existing and proposed vision algorithms.

Alex Bordetsky, Naval Postgraduate School
Space-Based Tactical Networking

Our primary objective is to create a testbed environment to integrate satellite networking platforms with emerging ad-hoc tactical networks using Cisco's radio-aware routing approach. Its intended purpose is to support exercises and experimentation in a variety of environments including battlefield telemedicine, maritime interdiction operations (MIO), intelligence, surveillance and reconnaissance (ISR), and casualty evacuation in a way that is transparent to the warfighter. We will enable other technologies to bring timely, necessary information to decision makers by developing the technologies that provide reliable, scalable networking services to austere environments, regardless of existing infrastructure, in a way that requires no additional user tasks or training. In order to achieve our objective, our experimentation process will leverage IRIS and emerging radio-aware routing integration with ad-hoc tactical networks, mobile sensors, and unmanned systems, primarily in MIO, battlefield medical, and fire support scenarios including synchronous global reachback to subject matter experts. The introduction of IRIS and radio-aware routing will eliminate the traditional "bent pipe", circuit-switched approach to satellite communications and implement a space-based packet-switched solution providing the warfighter with net-enabled capabilities.

We propose to explore, model, and simulate a constellation of satellites in orbit with IRIS-type routers on board, with an emphasis on Delay Tolerant Networks (DTN) and radio-aware routing to establish inter-satellite links. In an effort to minimize the power and size required for the handsets to reach the satellites, we will focus our efforts on the study of Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) constellations. Additionally, we will apply the fundamentals of orbital mechanics and unique aspects of the space environment to predict link quality in order to maintain quality of service (QoS) despite dynamic link status.

Konstantinos Psounis, University of Southern California
Efficient Airtime Allocation in Wireless Networks

The wireless networks of the future promise to support a number of useful and exciting applications, including home networking, vehicular networks providing increased safety and a plethora of entertainment options, social networking over an ad hoc network of users, environmental monitoring, and, more generally, pervasive communication anywhere and anytime. These applications rely on a combination of single-hop wireless technologies like enterprise WiFi with multi-hop wireless networking technologies like sensor networks, mesh networks, mobile ad hoc networks, and disruption tolerant networks. The largest challenge with these new technologies is how to offer good performance at a reasonable cost despite the increased wireless interference and the highly distributed, decentralized architecture. Specifically, one of the most important challenges is how to allocate the available bandwidth and airtime of these networks in an efficient manner, without requiring changes to the decades-old network stack, which includes protocols like 802.11 and TCP.

In this proposal, building upon our prior work, we plan to design, implement, and test a simple yet near-optimal scheme that allocates bandwidth and airtime in single- and multi-hop wireless networks and is fully transparent to existing protocols. Our scheme will have native support for mobile users and, in collaboration with CISCO engineers, we plan to integrate it into a multi-technology networking platform which allows vehicles to connect to the network anywhere and anytime, using a number of technologies including WiFi, WiMAX, cellular data transfer, ad hoc communication, and disruption tolerant communication. The proposal is directly relevant to a number of current projects running at CISCO, and we will collaborate closely with at least one CISCO team which works on such projects.

Carsten Bormann, Universität Bremen TZI
Evolving the constrained-REST ecosystem

The IETF is creating a set of application layer specifications for constrained devices ("Internet of Things").  While the initial design is nearing completion, a number of desirable next steps have become apparent:

CoAP needs a security architecture that scales down to constrained devices, yet still provides the security needed in the global Internet and does not become an impediment to deployment; resource discovery mechanisms are required that maximize the ability of components at the application layer to work with other components without needing application-specific knowledge, enabling management, configuration, and even some operation of new components; interoperation is needed with protocols beyond HTTP, again without becoming too tied to specific applications, enabling integration e.g. with presence protocols; and significant benefits may be expected from using mobile code as a way to extend the functionality of components without needing to upgrade their software each time, enabling more meaningful interoperation.
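
For readers less familiar with CoAP, the Python sketch below encodes the protocol's 4-byte fixed header and token as defined in RFC 7252, the base specification this work extends; the helper name and field values are illustrative.

    import struct

    def coap_header(msg_type: int, code_class: int, code_detail: int,
                    message_id: int, token: bytes = b"") -> bytes:
        """Build the 4-byte CoAP fixed header (RFC 7252) followed by the token."""
        ver, tkl = 1, len(token)
        assert 0 <= tkl <= 8, "token is at most 8 bytes"
        byte0 = (ver << 6) | ((msg_type & 0x3) << 4) | tkl
        code = (code_class << 5) | code_detail       # e.g. 0.01 = GET
        return struct.pack("!BBH", byte0, code, message_id) + token

    # a Confirmable GET (type 0, code 0.01) with a 2-byte token
    pkt = coap_header(msg_type=0, code_class=0, code_detail=1,
                      message_id=0x1234, token=b"\xca\xfe")
    print(pkt.hex())                                 # -> "42011234cafe"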

The proposal seeks to address these issues and opportunities in a way that generates results that feed into IETF specifications, leveraging the existing successes of the research team in this space, and planning for close collaboration with Cisco engineers engaged in IETF activities.

Yogendra Joshi, Georgia Tech
Experimental and Theoretical Evaluation of Alternate Cooling Approaches to 3D Stacked ICs and Their Implication on the Electrical Design and Performance

Cooling is a well-known critical bottleneck in 3D ICs. Moreover, the cooling solution itself has a large implication on the 3D electrical design of the chips and packages. The proposed research by Drs. Joshi and Bakir will explore the design and technology implications of two different cooling solutions for 3D architectures, and define their limits and opportunities. Namely, we are interested in 1) conventional air cooling, and 2) the use of single-phase pumped liquid cooling using microchannels/pin fins. This work will be accomplished via both experimental test beds and compact physical modeling. The modeling will take into account TSV density, chip thickness, and the thermal resistance of the interface between ICs. We will also do "first order" measurements exploring the impact of chip temperature rise on TSV and on-chip interconnect electromigration and resistance, and the subsequent impact on latency and bandwidth. This will be the first time that physical modeling and experimental data are used to benchmark the thermal and electrical performance of 3D ICs based on the choice of cooling technology, accounting for hot spots and exploring both steady state and transient effects.

Li Chen, University of Saskatchewan
SEE/Noise Study on a Power Controller and TCAM

The proposal is to study single event effects and noise in a digital power controller and a TCAM. These devices were identified as sensitive to neutron radiation and noise during qualification testing.

Cisco believes that this could be a general issue for the industry. In order to make the industry aware of the issue, understand the underlying physics and mechanisms, and hence improve future designs, Cisco decided to follow up by moving it into fundamental research with our university group. Our research group has laser measurement equipment, circuit-level simulation, and 3D TCAD simulation capabilities, and, most importantly, we have established expertise in this area. We would like to combine the pulsed laser and simulation tools to conduct the research; in particular, the noise study using a pulsed laser is a new approach in this research field. We expect to publish several papers in the next 1-2 years. This will demonstrate Cisco's visibility in this area and also provide our research group with content and funding for graduate students.

Diana Marculescu, Carnegie Mellon University
Modeling and Mitigation of Aging-Induced Performance Degradation

Negative Bias Temperature Instability (NBTI) is a transistor aging phenomenon that causes degradation in digital circuit performance with time. It occurs when PMOS transistors are stressed under negative bias at elevated temperatures, leading to an increase in transistor threshold voltage and, correspondingly, a decrease in switching speed. Similarly, Positive BTI (PBTI) affects NMOS transistors that are stressed under positive bias and likewise manifests itself as a reduction in circuit performance. Another key factor in determining long-term circuit reliability is Hot Carrier Injection (HCI), which induces performance degradation by decreasing the driving strength (drain current) of both PMOS and NMOS transistors. As opposed to BTI, which depends strongly on the probability of transistors being under stress (i.e., stress probability), the rate of degradation due to HCI is correlated with the frequency of transistor switching (i.e., switching activity). As technology scales, higher operating frequencies and on-die temperatures, and smaller transistor geometries have resulted in these aging effects becoming an increasingly dominant contributor to decreased system reliability. The work proposed herein addresses the impact of aging mechanisms including BTI and HCI, by developing efficient and cost-effective techniques for modeling and mitigation in the context of aging-induced performance degradation and associated timing failures.
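
As a rough illustration of why such modeling matters, the Python sketch below combines a commonly used empirical power-law BTI shift with the alpha-power delay law; every constant is a placeholder rather than a fitted technology parameter, and the proposal's models are considerably more detailed.

    def delta_vth(t_seconds: float, duty_cycle: float,
                  A: float = 2.0e-3, n: float = 1.0 / 6.0) -> float:
        """Empirical power-law BTI threshold shift (volts); A and n are placeholder constants."""
        return A * (duty_cycle * t_seconds) ** n

    def relative_delay(vdd: float, vth0: float, dvth: float, alpha: float = 1.3) -> float:
        """Alpha-power-law gate delay of the aged device relative to the fresh one."""
        return (vdd - vth0) ** alpha / (vdd - vth0 - dvth) ** alpha

    ten_years = 10 * 365 * 24 * 3600.0
    dv = delta_vth(ten_years, duty_cycle=0.5)   # a few tens of mV with these placeholder constants
    print(f"Vth shift: {dv * 1000:.1f} mV, "
          f"delay penalty: {100 * (relative_delay(0.9, 0.3, dv) - 1):.1f} %")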

Bharat Bhuva, Vanderbilt University
Soft Error Response of 3D IC Technologies

3D IC technologies integrate multiple die in the same package for increased performance. However, the resulting increase in transistor density may raise soft error FIT rates beyond the cumulative FIT rates of the individual die. This prospect of increased FIT rates must be investigated to ensure that designers do not end up with reduced system-level performance. This proposal is for the evaluation of soft error rates for 3D IC technologies. The technology characterization will be carried out using a 3-tier 3D IC technology, with single-event transient and charge collection measurements to quantify FIT rates.

Nelson Morgan, The International Computer Science Institute
Robust recognition of speech from impromptu videos

Given our current cortically-inspired "manystream" front end, which has successfully reduced recognition errors on artificially degraded speech signals, we will experiment with recognition of speech from impromptu videos. We will then improve the system further by experimenting with alternate auditory representations to replace the log mel spectrum as the time-frequency representation that is further processed by our manystream nonlinear filtering module. All methods will be compared with a baseline method that was specifically developed for robustness to additive noise, namely the ETSI-standard AFE.
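
For context, the short sketch below computes the conventional log mel spectrogram that the manystream module would further process; it assumes the librosa package, and the filename and parameter choices are placeholders.

    import librosa

    # conventional log mel spectrogram front end (baseline time-frequency representation only)
    y, sr = librosa.load("utterance.wav", sr=16000)          # placeholder file name
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                         hop_length=160, n_mels=40)
    log_mel = librosa.power_to_db(mel)                       # 40 mel bands x frames, in dB
    print(log_mel.shape)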

Guohong Cao, The Pennsylvania State University
Supporting Mobile Web Browsing with Cloud

The smartphone is becoming a key element in providing greater user access to the mobile Internet. Many complex applications, which used to run only on PCs, have been developed for and run on smartphones. These applications extend the functionality of smartphones and make it more convenient for users to stay connected. However, they also greatly increase the power consumption of smartphones, and many users are frustrated by the long delays of web browsing on smartphones. We have discovered that, most of the time, the long delay and high power consumption of web browsing are not due to the bandwidth limitations of 3G networks; the limited local computation capability of the smartphone is the real bottleneck for opening most webpages. To address this issue, we propose an architecture, called Virtual-Machine based Proxy (VMP), to shift the computing from smartphones to the VMP, which may reside in the cloud. To illustrate the feasibility of deploying the proposed VMP system in 3G networks, we propose to build a prototype using Xen virtual machines and Android phones on the AT&T UMTS network to realize the goals of the VMP architecture. We will address scalability issues and optimize the performance of the VMs on the proxy side. We will consider compression techniques to further reduce the bandwidth consumption and use adaptations to address poor network conditions at the smartphone.

Amit Roy-Chowdhury, University of California, Riverside
Optimizing Video Search Via Active Sensing

Large networks of video cameras are being installed in many applications, e.g., surveillance and security, disaster response, environmental monitoring, etc. Since most existing camera networks consist of fixed cameras covering a large area, this results in situations where targets are often not covered at the desired resolutions or viewpoints, thus making the analysis of the video difficult. For example, if we are looking for a particular person, we may encounter the problem that the number of pixels on the face is too few (i.e., very low resolution) for face recognition algorithms to work reliably. In this project, we propose a novel solution strategy that would integrate the analysis and sensing tasks more closely. We call this active sensing. This can be achieved by controlling the parameters of a PTZ camera network to better fulfill the analysis requirements, thus allowing for maximal utilization of the network, providing greater flexibility while requiring less hardware, and being cost efficient. In most work on video, the acquisition process has been largely separated from the analysis phases. However, if we consider how humans capture videos, we try to position the cameras so as to get the best possible shots depending upon the goal. In large networks, it is infeasible to do this manually and hence it is necessary to develop automated strategies whereby the acquisition and analysis phases are tightly coupled.

Alan E. Willner, University of Southern California
Flexible, Reconfigurable Capacity Output of High-Performance Optical Transceivers Enabling Cost-Efficient, Scalable Network Architectures

To meet the explosive capacity growth in future networks, it is crucial to advance the existing network architecture to support the dramatic increase in the transceiver data rates and the effective utilization of the network bandwidth. State-of-the-art transceivers provide 100-Gbit/s Ethernet per channel and fall short of the ultra-high rate of 1 Tbit/s to meet the expected growth. Two specific challenges remain: (a) the large discrepancy between high-rate and low-rate data channels, such that the large capacity of a single data channel from the transceiver will often not be required and be under-utilized, and (b) the large capital investment in ~Tbit/s terminal transceivers will often not be efficiently utilized. A laudable goal would be to create new transceivers and network architectures with extremely large capacity that can be tailored and shared to enable reconfigurable bandwidth allocation in future heterogeneous networks.

We propose to develop new transceivers and flexible and scalable network architectures by using optical nonlinearities to uniquely encode the amplitude/phase of the data and to channelize the data into multiple channels for the efficient capacity utilization of the future network architecture. We will determine the performance trade-offs and potential network architecture gains and explore methods for flexible use of data rates up to 100 GBaud and high data constellations up to 256-QAM levels on two polarization states.

Constantine Dovrolis, Georgia Institute of Technology, College of Computing
A novel traffic shaping mechanism for adaptive video streaming

During the course of a previous Cisco-funded research project, we discovered that there is a fundamental problem with adaptive video streaming applications. When more than one adaptive video player shares the same bottleneck, the players can get into an oscillatory behavior in which they keep requesting different video bitrates. This behavior can also cause significant unfairness between players, as well as inefficient bandwidth utilization at the bottleneck. Further, this problem cannot be resolved simply by modifying the player's adaptation logic.

The main objective of the proposed research is to solve the previous problem in a way that will ensure the stability, fairness and efficiency of adaptive video streaming. The specific mechanism that we plan to investigate is an adaptive and probabilistic traffic shaper employed at the video servers. We will also investigate the interactions between the proposed shaping mechanism and various existing or proposed rate adaptation algorithms at the client.
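
The shaper we will investigate is adaptive and probabilistic; as a point of comparison, the Python sketch below implements the plain token-bucket shaper that such a mechanism generalizes. Rates and sizes are illustrative only.

    import time

    class TokenBucket:
        """Classic token-bucket shaper: tokens accrue at `rate` bytes/s up to `burst` bytes."""
        def __init__(self, rate: float, burst: float):
            self.rate, self.burst = rate, burst
            self.tokens = burst
            self.last = time.monotonic()

        def send(self, nbytes: int) -> None:
            """Block until `nbytes` may be released downstream."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # shape segment writes to roughly 2.5 Mbit/s with a 64 kB burst allowance
    shaper = TokenBucket(rate=2.5e6 / 8, burst=64_000)
    for _ in range(10):
        shaper.send(16_000)      # 16 kB application writes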

Weidong Xiang, University of Michigan, Dearborn
Prototype for Wireless Access in Vehicular Environments (WAVE) Systems

Wireless access in vehicular environments (WAVE) is the state-of-the-art dedicated short-range communications (DSRC) technology, proposed to provide vehicles with high-speed vehicle-to-vehicle and vehicle-to-roadside wireless links. WAVE systems employ an advanced orthogonal frequency-division multiplexing (OFDM) modulation scheme and can achieve data rates of 6-27 Mb/s. WAVE systems operate in the 5.850-5.925 GHz spectrum band assigned by the Federal Communications Commission (FCC), and a roadside unit (RSU) can cover a range of up to 1000 feet. There are three main types of applications for WAVE systems: safety enhancement, traffic management, and communications. Some examples are adaptive traffic light control, traffic jam alarms, collision avoidance, toll collection, vehicle safety services, travel information, and commerce transactions. WAVE systems not only facilitate the implementation of intelligent transportation systems (ITS) but also provide vehicles with Internet access for information transmission.

The Department of Transportation (DOT) plans to install a large number of RSUs along the main roads and highways of the United States to make WAVE services available. WAVE systems have emerged as a complement to, as well as a competitor of, existing cellular mobile communications systems by offering high-speed wireless access services with small latencies. ABI, a technology market research firm, has forecast a huge WAVE manufacturing and service market in the near future. WAVE creates a new vehicle-based information technology industry and will be deployed worldwide. The magnitude and degree of the impacts of WAVE systems on the economy, society, and our daily life are substantial, multi-layered, and profound.

This proposal seeks support from Cisco Research to complete, within one year, a prototype WAVE system consisting of four onboard units (OBUs) and two RSUs. The objective of this proposal is to develop and realize a fully functional real-time wireless access in vehicular environments (WAVE) prototype based on the IEEE 802.11p standard. Upon completion, the prototype will become the first, or one of the first, functional WAVE prototypes from either academia or industry. With the help of the prototype, the PI will present 1) a 5.9 GHz mobile channel model based on extensive experiments, which will provide a solid foundation for the theoretical study and engineering of WAVE systems; and 2) a WAVE system simulator, which will significantly benefit the research and development of WAVE systems. Such prototypes can be used as WAVE development kits for research and development, as OBUs/RSUs for building up a WAVE testbed, and as reference models for WAVE application-specific integrated circuit (ASIC) chip design.

João Barros, Instituto de Telecomunicações, Universidade do Porto
PredictEV – Mobile Sensing and Reliable Prediction of Electric Vehicle Charging in Smart Grids

The PredictEV project will deliver key technical advancements and a proof-of-concept for smart grid sub-systems capable of exploiting the real-time data provided by ICT-enabled electric vehicles (EVs) and city-scale mobile sensing to predict where, when and how charging will take place in urban power systems. The project shall leverage a city-scale mobile sensor network supported by an IEEE 802.11p- and IPv6-compliant vehicular ad-hoc network with 500 GPS-enabled nodes that is currently under deployment in Porto, Portugal.

This urban scanner is expected to deliver several gigabytes of data per day, which will be processed by a cloud computing facility to support low-carbon mobility and advanced grid management services. The focus of this project is on the distributed prediction and dissemination modules required for the efficient integration of EVs and charging stations as part of a complete smart grid solution that is embedded in the urban fabric.

Using specific data sharing protocols to be developed in this project, each EV will be able to communicate through the existing vehicular network and keep the grid infrastructure informed about its position, battery status and other relevant parameters. Our modules shall fuse this data with the traffic values from the urban scanner and a-priori information on the EV's mobility patterns to deliver distributed, precise and localized predictions of the electricity demands at various points of the grid at any given moment in time. Such predictions can be used to manage distribution grids, while enabling the recommendation of strategic charging opportunities for individual EVs based on real-time information.

The project shall explore the synergies of two teams hosted by the University of Porto, the Power Systems Group at INESC Porto and the Networking and Information Security Group at Instituto de Telecomunicações - Porto, who together have extensive expertise in vehicle-to-grid systems, distribution networks, intelligent transportation systems, communication protocols, security and information theory, among other areas. Both teams are also involved in and have access to the Portuguese electric mobility platform mobi.e, which plans to have 1350 charging stations already deployed by the end of 2011. Close cooperation is also ensured with Living PlanIT, a company currently planning to construct a smart city called PlanIT Valley in close vicinity to Porto. We believe PredictEV will contribute to a smarter electric mobility infrastructure in which utilities and city managers find it simpler to predict EV demands and mobile customers will enjoy reliable charging services that are readily available at affordable costs.

Grenville Armitage, Swinburne University of Technology
Using delay-gradient TCP to reduce the impact of "bufferbloat" and lossy wireless paths

Traditional loss-based TCP congestion control has two significant problems -- it performs poorly across paths having non-congestion related packet losses, and it relies on filling and overflowing network buffers (queues) to trigger congestion feedback. Delay-based TCP algorithms utilise trends in RTT estimates to infer the onset of congestion along an end to end path. Such algorithms can optimise their transmission rates without inducing packet losses and keep induced queuing delays low. Delay-based TCP is thus attractive where a variety of loss-tolerant and loss-sensitive traffic flow over a mix of fixed and wireless link layer technologies.
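
To make the delay-gradient idea concrete, the Python sketch below follows the spirit of delay-gradient congestion control: the sender backs off probabilistically as the smoothed RTT gradient grows. The exponential back-off form and all parameter values here are illustrative assumptions, not the algorithms to be evaluated.

    import math
    import random

    def backoff_probability(rtt_gradient_ms: float, scale_ms: float = 3.0) -> float:
        """Probability of backing off given the smoothed RTT gradient; positive
        gradients (growing queues) make a back-off increasingly likely."""
        if rtt_gradient_ms <= 0:
            return 0.0
        return 1.0 - math.exp(-rtt_gradient_ms / scale_ms)

    def on_ack(cwnd: float, smoothed_gradient_ms: float, beta: float = 0.7) -> float:
        """One congestion-avoidance step: back off probabilistically, otherwise grow."""
        if random.random() < backoff_probability(smoothed_gradient_ms):
            return max(2.0, cwnd * beta)     # delay-triggered multiplicative decrease
        return cwnd + 1.0 / cwnd             # standard additive increase per ACK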

We believe practical delay-based TCP algorithms can help work around the emerging problem of excessively-large queues in deployed infrastructure ("bufferbloat"). However, past delay-based TCPs have been overly-sensitive to packet loss and insufficiently aggressive in the presence of loss-based TCP flows. Enabling delay-based TCPs to coexist with loss-based TCPs is a significant challenge to real-world deployment.

Our previous work on delay-threshold and delay-gradient TCP congestion control has shown improved loss-tolerance and goodput in single-hop, intrinsically lossy link environments. We propose to evaluate more complex multi-hop, multiple-bottleneck scenarios (particularly when exposed to interfering cross-traffic using the NewReno, CUBIC and CompoundTCP algorithms), investigate the auto-tuning of relevant parameters and thresholds, and investigate the possibility of a hybrid delay-threshold and delay-gradient mechanism.

Our algorithms will be publicly released as BSD-licensed patches to the FreeBSD networking stack, intended for real-world deployment.

Wei Mao, Computer Network Information Center, Chinese Academy of Sciences
Integrated IPSec Security Association Management with DHCP

IPSec [RFC4301] is pervasive in Customer Premises Networks (CPNs) to secure the communication between the user host and a service provider of the Customer Premises Network, for instance a DNS recursive name server. An IPSec Security Association (SA) is a "connection" that provides security services to the traffic carried by it and specifies the key and algorithm that the IPSec connection relies on. An SA can be established through manual configuration by the administrator. However, an SA is IP-address oriented and cannot function after the involved IP addresses change, which means extra SA parameter configuration is inevitable after renumbering the network or when a host roams.

IKEv2 [RFC5996] eliminates manual reconfiguration. It employs two rounds of message exchanges. The first exchange of messages establishes a trust relationship between the two IKEv2 participants. This exchange supports two authentication methods: digital signatures and pre-shared keys. The former requires certificate management; if the roaming host is not configured with the trust anchor that the CPN uses, certificate-based IKEv2 cannot be realized. The latter still relies on a preconfigured trust relationship, and pre-shared keys are tied to particular IP addresses. The pre-shared key method cannot be used with mobile systems or systems that may be renumbered, unless the renumbering is within a previously determined range of IP addresses. Also, when pre-shared keys are employed, IKEv2 computations cannot be offloaded to attached hardware. Therefore, the pre-shared key method is not a purely automatic one and does not work well with dynamic networking.

DHCP [RFC2131 & RFC3315] allows a computer to be configured automatically, eliminating the need for intervention by a network administrator. Since DHCP is responsible for configuring the IP address and an IPSec SA is IP-address oriented, we propose to extend DHCP to integrate the IPSec SA establishment process into the DHCP message exchange, reinforced by additional interactions, in order to dynamically configure the SA parameters shared between the user host and the service provider of the Customer Premises Network.

Yuzhuo Fu, Shanghai Jiao Tong University
Virtualization for NoC fault tolerance

Advances in IC technology are driving the NoC (Network on Chip) to dominate many-core communication. Another effect of shrinking feature sizes is that power supply voltage and device Vt decrease, and wires become unreliable because they are increasingly susceptible to noise sources such as crosstalk, coupling noise, soft errors, and process variation. Therefore, NoC fault tolerance, especially against soft errors, becomes a challenge for VLSI designers. In this proposal, we take a new view and introduce virtualization technology to achieve the following targets: 1) meet different fault tolerance requirements through hardware/software cooperation on a common hardware platform; 2) hide error detection, masking, and recovery for effective error management; and 3) reduce the performance loss and overhead of virtualization with hardware assistance. Toward these goals, a complete FTVL (Fault Tolerance Virtualization Layer) is planned as reliable middleware for the NoC.

Sachin Katti, Stanford University
Full Duplex Wireless

We propose to research full duplex wireless, a novel technology with which radios can simultaneously receive and transmit on a single band. Our earliest prototypes required 3 antennas. Current ones require 2. The proposed research has three major topics: developing a single antenna design (and thereby enabling full duplex MIMO), algorithms for adaptively tuning the full duplex circuits, and using full duplex radios to build easy-to-design WiFi-direct devices and wireless relays.

Amin Vahdat, University of California, San Diego
A Virtualized Service Infrastructure for Global Data Delivery

This project will investigate novel models for delivering application functionality and virtualized storage across an ensemble of cloud data centers spread across the Internet. The work proposes naming data objects as the fundamental mechanism for naming resources, rather than the traditional approach of employing IP addresses. On top of this base abstraction, we will investigate two interlocking issues. First, we will investigate various consistency models for the underlying data. Given the limited reliability of individual Internet paths, we argue that data replication across multiple data centers is fundamental to achieving high levels of availability (five nines). In the face of such replication, a critical question is the model---and the required network support---for delivering a range of consistency semantics to applications and clients making use of the data. Second, we will investigate APIs and implementation techniques for layering a range of data structures on top of the underlying object store. For instance, we will explore offering replicated, consistent queues, trees, hash tables, graphs, etc. as the fundamental building blocks for application construction and communication.

Rolf Stadler, KTH Royal Institute of Technology
Network Search for Network Management

Our goal is to engineer a network search functionality that can be understood as "googling network data in real-time." For instance, the search query "find information about the flow <src, dest, app>" would return a list of systems that carry the flow in their caches, the interfaces the flow uses, the list of firewalls the flow traverses, the list of systems that impose traffic policies on this flow, etc., and possibly even the flow's current QoS metrics. Network search is complementary to monitoring: state information and monitoring variables are not explicitly addressed by network location but are characterized as content. The idea is that the search will be performed on static and dynamic data, on current as well as historical information. To achieve scalability and fast completion times for search on operational data, network search has to be performed "inside the network," supported by a decentralized, self-organizing architecture. We envision that the capability of network search will enable novel and powerful network management functions. First, it will spur the development of new tools that will assist human administrators in network supervision, maintenance and fault diagnosis. Second, we believe that a new class of network control and service functions will emerge which will use a network search engine.
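
As a toy illustration of node-local query evaluation, the Python sketch below matches a content-based query against a node's cached flow records; the record fields and query form are invented for this example.

    # Toy sketch: a node matches a content-based search query against its
    # local cache of flow records.  Field names and query form are invented.

    local_flow_cache = [
        {"src": "10.0.0.5", "dst": "10.0.1.9", "app": "rtp",
         "iface": "ge-0/0/1", "policy": "voice-priority"},
        {"src": "10.0.0.7", "dst": "10.0.2.2", "app": "http",
         "iface": "ge-0/0/2", "policy": None},
    ]

    def search(query, records):
        """Return records whose fields match every key/value pair in the query."""
        return [r for r in records
                if all(r.get(k) == v for k, v in query.items())]

    # "find information about the flow <src, dest, app>"
    hits = search({"src": "10.0.0.5", "dst": "10.0.1.9", "app": "rtp"},
                  local_flow_cache)
    for h in hits:
        print(h["iface"], h["policy"])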

Larry Smarr, University of California, San Diego
Integration of Cisco Telepresence with OptIPortals

Cisco TelePresence provides corporations with a complete, high-quality, high-definition video teleconferencing system. It consists of one to three high-definition display panels, with an additional screen for PowerPoint slides, uses proprietary codec technology, and runs over dedicated, telecom-provided DS3 circuits. Cisco TelePresence, however, does not provide academics or government and corporate decision makers with combined video teleconferencing and advanced data displays that cover entire walls, that provide advanced interactive capabilities, and that run over the National LambdaRail (NLR). We plan to integrate Cisco TelePresence with the NSF-funded OptIPuter software and display technology, to prototype a system for digital collaboration among researchers in universities, government, and industry that can be rapidly expanded to data-intensive researchers at campuses connected to NLR. A key motivation for this initiative is to provide a prototype oriented to requirements that must be met in the next few years. The current TelePresence model is based on common services and technology readily available today at almost all fairly well connected locations. The OptIPuter reflects emerging advanced capabilities that can be implemented at selected sites using specialized technologies and configurations. This proposed project would demonstrate both how the OptIPuter techniques can significantly enhance the current TelePresence environment and how this environment can be extended to meet emerging future requirements, reflecting capabilities anticipated in the next few years. Such capabilities will include much higher capacity, performance, and quality of communication services.

Kenneth E. Goodson, Stanford University
Power and Thermal Engineering of Microprocessors & ASICS in Router Systems

Router and switching systems face mounting challenges relating to power consumption, noise, and reliability. Power consumption comprises the delivery of high power to drive devices and the removal of an almost equal amount of heat. Much progress can be made by leveraging the improved thermal management technologies developed for the electronics industry over the past decade, which may enable significant reductions in fan power consumption and noise at projected chip power generation levels. This work quantifies these potential benefits by developing electrothermal simulations of chips and chip networks and by helping CISCO develop a systematic chip temperature measurement approach. In a parallel effort, this work will examine the power distribution network in CISCO network chips and study techniques to modulate and optimize the delivery of energy. The proposed work will be a joint collaboration between the Mechanical Engineering and Electrical Engineering Departments of Stanford University.

Chava Vijaya Saradhi, Center of REsearch And Telecommunication Experimentation for NETworked communities (Create-Net)
Dynamic Mesh Protection/Restoration (Protectoration) in Impairment and 3R Regenerator Aware GMPLS-Based WDM Optical Networks

Recent evidence from Sprint [1] suggests that multiple simultaneous component failures (or frequent fiber cuts due to construction activity in emerging countries, combined with varying repair times) in IPoDWDM networks are more common than expected; in fact, as many as 6 simultaneous component failures were detected. Given the growing network size and the increasing economic impact of failing to meet pre-agreed and stringent SLAs, network operators cannot rely on simple single-link failure models. Hence, availability/reliability analysis based on multiple-failure models has started to attract significant attention. In addition, dynamic protection/restoration algorithms must take into account the constraints of the various optical component technologies deployed in transport backbone networks (e.g., fixed OADMs, 2-degree ROADMs, N-degree ROADMs, Y-cables, protection switching modules [PSMs]) [2], the limits on optical reach due to physical layer impairments (PLIs) [3, 4], the switching constraints of fixed or colorless and omni-directional components, and the availability of regenerators [5, 6] and tunable transponders [2]; without these considerations it is impossible to realize dynamically reconfigurable and restorable translucent optical networks. Most of the work in the literature assumes ideal and homogeneous physical-layer components/links and characteristics, and hence these algorithms cannot be applied directly to production networks. In this project, we plan to develop novel dynamic single/multiple link/node-failure recovery techniques based on the coordination of the management and control planes in GMPLS-based WDM optical networks, considering the availability of different optical components, including 3R regenerators and tunable transponders, and their constraints, together with the limitations on optical reach due to PLIs, by leveraging the work we have pursued and the experience we have gained through several related past projects [3-6].

Krishnendu Chakrabarty, Duke University
Fault-Isolation Technology for System Availability and Diagnostic Coverage Measurement

The proposed work is on optimized fault-insertion (FI) technology for understanding the impact of physical defects and intermittent/transient faults on chip, board, and system operation of electronic products. It includes theoretical research on optimization, statistical inference, self-learning theory, etc., for fault diagnosis, as well as evaluation using public-domain benchmarks. Research results will be disseminated through conference and journal publications.

Massimiliano Pala, Dartmouth College
Portable PKI System Interface for Internet Enabled Devices and Operating Systems

In a 2003 analysis of a PKI survey, OASIS identified the three most urgent obstacles to wide PKI adoption: the high cost of PKI setup and management, usability issues (at both the user and management levels), and the lack of support for PKI technology in applications. In particular, a major issue when developing applications is the absence of a standard, well-designed, and easy-to-use API for PKI operations. Subsequent work from OASIS in 2005 showed that not much had changed since 2003. Today, developing PKI-enabled applications is still difficult. When it comes to using digital certificates in network devices like routers or access points, integrating PKI functionality is even harder because of limited capabilities or complex programming interfaces. Thus, providing a usable interface for the final user (i.e., the network administrator) is extremely difficult.

This project will address PKI usability and complexity issues by developing a standard API and an associated abstraction. More specifically, we will focus on the portability and usability of PKI applications. In order to achieve portability, we will standardize the API specification by writing a new Internet Draft. This public document will allow other developers, cryptographic library providers, or Operating System vendors to implement our API across different systems (e.g., computers, network equipment, portable devices, sensors, etc.). Regarding usability, we will focus on making PKI operations easy for the developer. We will leverage our extensive experience (both in the academic and in the "real" world) in order to raise the "usability bar" of PKIs and lower the software development costs associated with the use of digital certificates.

Project Objectives:
We propose to study, design and implement a highly usable API for PKI operations. More specifically, we will investigate, develop and standardize a new API specification that will enable developers to easily integrate and/or implement PKI functionality in applications and Operating Systems. In contrast to existing all-purpose cryptographic libraries, our work will focus on providing an abstraction layer capable of integrating existing protocols (e.g., SCEP, PRQP) into simple high-level PKI-specific calls. In particular, we will simplify the interactions between End-Entities and Certification Authorities and the integration of authorization information processing into secure communication between peers. By analogy, this project will provide the same benefits in portability and usability as the POSIX interface provides for modern Operating Systems.
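
To make the intended level of abstraction concrete, the following hypothetical Python sketch shows what such high-level PKI calls might look like from an application developer's point of view; the class and method names are invented and are not part of the specification to be proposed.

    # Hypothetical high-level PKI interface, illustrating the intended
    # abstraction level only (names and signatures are invented).

    class PKISession:
        """Hides protocol details (enrollment, discovery, validation) behind a few calls."""

        def __init__(self, ca_url):
            self.ca_url = ca_url  # Certification Authority endpoint

        def enroll(self, subject_name, key_size=2048):
            """Generate a key pair and obtain a certificate for subject_name."""
            raise NotImplementedError("illustrative stub")

        def validate(self, certificate):
            """Build and verify the chain, check revocation, return True/False."""
            raise NotImplementedError("illustrative stub")

        def renew(self, certificate):
            """Re-enroll before expiry, reusing the existing identity."""
            raise NotImplementedError("illustrative stub")

    # Intended usage by an application developer:
    #   pki = PKISession("https://ca.example.net")
    #   cert = pki.enroll("router42.example.net")
    #   assert pki.validate(cert)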

Expected Outcomes:
As a result, our work will help to lower the costs associated with the adoption of digital certificates and public key cryptography for authentication purposes, especially on Internet-connected devices. Moreover, this work will allow PKI adopters to focus more on the applications themselves, where security benefits and return on investment are easier to demonstrate.

Raymond Yeung, The Chinese University of Hong Kong
Two Problems in Network Coding

We propose to work on two problems in network coding. The first is a fundamental problem that extends the theory of network coding from block codes and convolutional codes to variable-length codes. This extension has the potential of combining source coding and network coding for joint optimization. The second problem is to study the distributed scheduling and replication strategies of a peer-to-peer system for content distribution, including coding-based strategies. The results should be useful for content distribution, whether streaming or VoD, which is projected to be the predominant use of network bandwidth in the coming years.

Qiang Xu, The Chinese University of Hong Kong
Physical Defect Test for NTF with Minimized Test Efforts

No Trouble Found (NTF) devices are devices that are returned from system customers as having failed in the field but refuse to fail when retested on automatic test equipment (ATE). This phenomenon is one of the most serious challenges for test engineers because it is very difficult, if not impossible, to identify the root cause of such devices' problems without being able to reproduce the failure. To tackle this problem, in this project we plan to investigate the fundamental reasons for the occurrence of the NTF phenomenon and develop novel layout-aware testing techniques that drive the circuit under test (CUT) into its worst-case operational states in functional mode. By doing so, we can mimic on the ATE the behavior of the CUT close to its functional operation, thus dramatically reducing the NTF rates for returned defective devices.

Jacob A. Abraham, The University of Texas at Austin
Mapping Component-Level Models to the System and Application Levels

Complexity is the primary stumbling block to the effective analysis of computer systems. Analyzing complex systems, including Systems-on-Chip (SoCs), for verification, test generation or power consumption is made harder by the increasing complexity of integrated circuits. In addition, technology trends are increasing the complexity of the basic component models in a system. The result is that most tools can only deal with relatively small components in the design. This proposal is based on our novel techniques for mapping module-level constraints to the system and application levels. We have applied them to enable at-speed tests which target desired faults in a module (such as stuck-at faults or small delay defects), and which can be applied from the system-level inputs without relying on additional on-chip circuitry. Recent research has used these ideas to find functionally valid system-level vectors which produce peak power in an embedded module. These techniques have been applied to instruction-set processors, and will be extended to enable other targets, such as application-specific systems, to be analyzed effectively. Other problems addressed will include abstractions to manage the complexity of test generation, measures of coverage, and architectural support for the mapping process.

James Martin, Clemson University
Broadcasting Video Content in Dense 802.11g Sports and Entertainment Venues

Wireless communication is changing how humans live and interact. Evidence of this evolution can be seen in the widespread deployment and usage of 802.11, 802.16, and 3G systems. Multimodal devices such as the iPhone and the Droid are unifying 802.11 and 3G networks. A particularly interesting application domain that demonstrates the rapid developments in wireless technology is sports and entertainment. Video, such as live broadcast video, is considered to be the application that most engages users (fans or patrons) in a sports and entertainment venue. Deploying this capability at a stadium to support hundreds to thousands of users is challenging. A promising approach for robust delivery of video content to a large number of wireless stations is to multicast the data in conjunction with application-level forward error correction (AP-FEC) based on Raptor codes. The proposed research will provide theoretical and simulation-based results that will help the academic community better understand the feasibility, limitations, challenges and possible solutions surrounding large-scale deployment of video broadcast in extremely dense 802.11 deployments when using modern handheld devices and state-of-the-art application coding strategies.
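
The much-simplified Python sketch below illustrates the basic erasure-recovery idea behind application-level forward error correction, using a single XOR parity packet per block; real Raptor codes are far more general, so this is only a conceptual stand-in.

    # Conceptual stand-in for application-level FEC: one XOR parity packet
    # per block of source packets lets a receiver repair any single loss in
    # the block without retransmission.  Raptor codes generalize this idea.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def encode_block(packets):
        """Return the source packets plus one XOR parity packet."""
        parity = packets[0]
        for p in packets[1:]:
            parity = xor_bytes(parity, p)
        return packets + [parity]

    def repair_block(received):
        """received: the block with exactly one entry set to None (the loss)."""
        lost = received.index(None)
        repaired = None
        for i, p in enumerate(received):
            if i == lost:
                continue
            repaired = p if repaired is None else xor_bytes(repaired, p)
        received[lost] = repaired
        return received[:-1]  # drop the parity packet, keep the source packets

    block = encode_block([b"AAAA", b"BBBB", b"CCCC"])
    block[1] = None               # simulate a multicast loss
    print(repair_block(block))    # the lost packet is reconstructed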

Priya Narasimhan, Carnegie Mellon University
YinzCam: Large-Scale Wifi-Based Mobile Streaming in Sports Venues

YinzCam is a first-of-its-kind indoor wireless service that targets live video streaming to the WiFi-enabled cellphones of a high-density, often-nomadic, large user population. Designed to enhance the fan's viewing experience at sporting events, YinzCam allows many (potentially thousands of) fans at the event to (1) select from, and view, high-quality live video feeds from different camera angles in real-time, where the cameras are positioned at various strategic points in the arena, e.g., the net-cam in ice hockey or the line-judge cam in football, (2) view action replays from the current event and previous/related events, and (3) participate, in real-time, in any trivia games or ongoing live polls during and after the event. The research challenges involve scalable, high-quality, real-time, interactive video and multimedia delivery over a wireless network within the arena, to a large number of handheld mobile devices or touch-screens placed throughout the arena.

YinzCam has been successfully piloted since November 2008 within Mellon Arena, the home of the Pittsburgh Penguins, an NHL professional hockey team. Mellon Arena seats 17,000+ fans, and the Pittsburgh Penguins have offered the PI's research team (within Carnegie Mellon's new Mobility Research Center) the unprecedented opportunity of deploying the YinzCam technology live at Mellon Arena and offering it to fans throughout the 2008-09 and 2009-10 hockey seasons. YinzCam has already been tested at 60+ Penguins home games in 2009, with multiple phone models (Nokia, Samsung, Android, iPod/iPhone, BlackBerry) and multiple live, broadcast-quality video-camera angles, and has received press coverage and support from the Pittsburgh Penguins. The purpose was to validate the technology sufficiently with real users so as to deploy YinzCam permanently at full scale in the Consol Energy Center, the new arena for the Pittsburgh Penguins as of Fall 2010.

The research collaboration with Cisco will allow us to jointly explore the issues underlying this large-scale, high-quality delivery of video, to gather data on the usage, performance, quality-of-service and reliability issues of such a large-scale deployment, and to explore how YinzCam can be extended with location-aware and social networking services to further heighten a fan's experience at a game. More importantly, because of YinzCam's real-world deployment, this project will also provide Cisco with an invaluable opportunity to (1) discover and analyze the usage patterns and interests of mobile video users at sporting events, (2) understand the issues involved in supporting infrastructural and wireless-network performance and scalability for high-quality video, and (3) push the limits of current Cisco technologies to support such interactive, user-centered, mobile video applications. The data from YinzCam's deployment, the solutions that we innovate, and the lessons learned will have both business value and research impact.

John Kubiatowicz, University of California, Berkeley 
CISCO Affiliation for the Berkeley Parallel Computing Laboratory:  Reinventing Systems Software and Hardware for ManyCore systems

We propose to make CISCO an affiliate of the Berkeley Parallel Computing Research Laboratory (ParLAB). The primary charter of the ParLAB is to reinvent parallel computing in an era in which manycore processors will be increasingly prevalent, both in servers and in client machines. The ParLAB approach involves an innovative combination of new software engineering techniques, better parallel algorithms, automatic tuning, new operating system structures, and hardware support. The focus of ParLAB research lands in the center of many of CISCO's interests, including the structuring of software and hardware aspects of Internet routing. We introduce the two-level partition model and the Tessellation OS.

Lu Yu, Zhejiang University
Moving Object Detection Based Adaptive 3D Filtering and Video Coding Optimization

Noise in video images is inevitably introduced by the camera. If an ordinary camera is used in a normal, dimly lit office environment, the noise becomes more significant and affects coding efficiency more. Filtering the video may improve coding efficiency but will also introduce detail loss. Intelligent filtering that ensures high coding efficiency as well as good visual quality is valuable for video conferencing systems, especially when high resolution and high quality are required. In this proposal, we aim to optimize video coding by intelligently using different de-noising filters and coding parameters according to the detection of moving objects and static background.

This project will focus on:
(1) Moving objects detection and tracking, including knowledge-based object detection and tracking;
(2) Content-adaptive 3-dimensional filtering;
(3) Optimized video coding strategy, including mode selection and rate control.

Balaji Prabhakar, Leland Stanford Junior University
Stanford Experimental Data Center Lab

The Stanford Experimental Data Center Lab has a broad research agenda, spanning networking, high speed storage and energy efficiency. In networking, a central project is to design and build a prototype of a large (hundreds of ports), low-latency, high bandwidth interconnection fabric. Other major projects include rapid and reliable data delivery, and software-defined networking. Storage is viewed as existing largely, if not entirely, in DRAM servers; this is the focus of the RAMCloud project. The overall goal is to combine the RAMCloud project and the interconnection fabric project so that applications which access data can do so with a latency less than 2 to 3 microseconds. The projects of the lab involve many faculty and students at Stanford as well as Tom Edsell and Silvano Gai at Cisco Systems.

Mario Gerla, University of California, Los Angeles
Vehicular Network and Green-Mesh Routers

Vehicle communications are becoming a reality, driven by navigation safety requirements and by the investments of car manufacturers and Public Transport Authorities. As a consequence, many of the essential vehicle grid components (radios, Access Points, spectrum, standards, etc.) will soon be in place, enabling a broad gamut of applications ranging from safe, efficient driving to news, entertainment, mobile network games and civic defense. One of the important areas where this technology can have a decisive impact is traffic congestion relief and urban pollution mitigation in dense urban settings. Vehicles are the main offenders in terms of chemical and noise pollution. It seems fair that some of the research directed at vehicular communications be used to neutralize the negative effects of vehicular traffic. In this proposal we develop a combined communications architecture and strategy for the control of urban congestion and pollution (the two components being closely related to each other). Vehicle-to-vehicle communications alone will not be sufficient to solve these problems; the infrastructure will play a major role in this process. In particular, the fight against traffic congestion and pollution will require a concerted effort involving car drivers, Transport Authorities, Communications Providers and Navigator Companies. A key role in bringing these players together is played by the Green-Mesh, the wireless mesh network that leverages roadside Access Points and interconnects vehicles to the wired infrastructure. In this project, we will design a Green-Mesh architecture that can deliver key functionalities, from safe navigation to intelligent transport, pollution monitoring and traffic enforcement.

Colin Perkins, University of Glasgow
Architecture and Protocols for Massively Scalable IP Media Streaming

The architecture and protocols for massively scalable IP media streaming are evolving in an ad-hoc manner from multicast UDP to multi-unicast TCP. This shift has advantages in terms of control and deployability over the existing network, but costs in terms of increased server infrastructure, energy consumption, and bandwidth usage. We propose to study the nature of the transport protocol and distribution model for IPTV systems, considering the optimal trade-off for services that are scalable and economical while still supporting trick-play.  Specifically, we will consider novel approaches to enhancing TCP streaming, as compared with managed IP multicast services. We will evaluate different transport protocols and distribution models for IPTV systems, to understand their applicability and constraints, and make recommendations for their use. Thereafter, we will develop router based enhancements to TCP-based streaming, borrowing ideas from publish/subscribe and peer-to-peer overlays, to bring its scalability closer to that of multicast IPTV systems.

Sharon Goldberg, Boston University
Partial Deployments of Secure Interdomain Routing Protocols

For over a decade, researchers have been designing protocols to correct the many security flaws in the Border Gateway Protocol (BGP), the Internet's interdomain routing protocol. While there is widespread consensus that the Internet is long overdue for a real deployment of one of these security protocols, there has been surprisingly little progress in this direction. This proposal argues that a major bottleneck to the deployment of secure interdomain routing protocols stems from issues of partial deployment. Since the Internet is a large, decentralized system, any new deployment of a secure interdomain routing protocol will necessarily undergo a period in which some networks deploy it but others do not. Thus, the challenge we address in this proposal is twofold: first, we look for a rigorous metric to understand the security properties of a protocol when it is partially deployed, and second, we model the economic incentives an AS has to deploy a secure routing protocol when it is only partially deployed in the system. Our objective is to address an issue that we believe is crucial for both Cisco and the Internet at large; namely, to develop plans for adopting secure routing protocols in ways that are consistent with both the security requirements and the economics of the interdomain routing system.
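
As a toy illustration of what a partial-deployment metric might measure (not the metric this proposal will develop), the Python sketch below computes, on a small invented AS-level graph, the fraction of source-destination pairs whose shortest route traverses only ASes that have deployed the secure protocol.

    # Toy partial-deployment metric (illustrative only): the fraction of
    # source-destination pairs whose shortest path lies entirely within the
    # set of ASes that have deployed the secure routing protocol.
    from collections import deque
    from itertools import combinations

    def shortest_path(graph, src, dst):
        """Breadth-first search returning one shortest path, or None."""
        parent = {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return list(reversed(path))
            for v in graph[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        return None

    def secure_pair_fraction(graph, deployed):
        pairs = list(combinations(graph, 2))
        secure = sum(1 for s, d in pairs
                     if (p := shortest_path(graph, s, d)) and set(p) <= deployed)
        return secure / len(pairs)

    toy_as_graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
    print(secure_pair_fraction(toy_as_graph, deployed={1, 2, 4, 5}))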

Leonard Cimini, University of Delaware
MU-MIMO Beamforming for Interference Mitigation in Multiuser IEEE 802.11 Networks

The proposed work is a continuation of a successful project, initially focused on IEEE 802.11n beamforming (BF). Previously, we made significant progress in understanding fundamental issues in MIMO BF. Much of this work has been reported at several conferences and in three journal papers ready for submission. In the coming year, we propose to focus on interference mitigation for 802.11ac and for a radio-enabled backhaul network.  The enhanced capabilities promised by 802.11ac come with new challenges, in particular, the task of effectively managing the time, frequency, and spatial resources.  In addition to multiple antennas with multiple spatial streams, 802.11ac will permit a multiuser mode.  We will focus on transmit BF approaches, combined with receiver-based processing, to mitigate interference and maximize network performance. We also propose to build on our previous investigation of radio-based backhaul for a network of 802.11n (or 802.11ac) APs, with the goal of delivering a set of guidelines for the design of the radio backhaul network.

Robbert VanRenesse, Cornell University
Fault-Tolerant TCP Project

We propose to design, build, and evaluate Reliable TCP (RTCP), a service that supports TCP connections that persist through end-point failure and recovery. Various projects have addressed this problem before. Unfortunately, most are written for a client-server scenario in which the server is replicated, and do not support transparent recovery from client failures. RTCP takes a radically different approach from prior work, leading to several salient features:

* RTCP is fully symmetric, supporting the recovery of both endpoints;
* RTCP requires no modifications to the endpoints -- they run any operating system kernel and TCP stack of choice;
* RTCP buffers no application data;
* RTCP has little or no effect on throughput, and under common circumstances has lower latency cost than standard replication approaches.

E. William Clymer, Rochester Institute of Technology
A Preliminary Investigation of CISCO Technologies and Access Solutions for Deaf and Hard-of-Hearing

Faculty from the National Technical Institute for the Deaf (NTID), one of the eight colleges of Rochester Institute of Technology (RIT), with colleagues from other RIT colleges, will provide a team of experts to consider the application and adaptation of the ways in which CISCO products can benefit communication access for deaf and hard-of-hearing individuals. Three research strands are proposed:

* State of the Art and Recommendations Related to 911-411-211 Telephone Response Systems
* Evaluate Possible Use of Avatars to Enhance Direct Communication Support for Deaf and Hard-of-Hearing Users
* Evaluate TelePresence Technologies for Face-to-Face and Remote Communication for Deaf and Hard-of-Hearing Users

This project is a multi-year effort. During the first year, faculty at NTID and RIT will form teams, review the literature and state-of-the-art on CISCO access technologies, set up and test the various CISCO technologies in RIT/NTID laboratories, conduct focus groups and community demonstrations, gather and analyze expert opinion, and prepare whitepapers for each strand, including recommendations for future efforts in these areas. After the first year of work, the team will review progress and findings and determine which current or possibly new research would be most fruitful, and will make recommendations for continued research and development to CISCO.

The common thread among the three proposed strands will be the identification of local RIT experts in related fields to refine and focus their expertise on the specific telecommunication challenges facing deaf and hard-of-hearing persons. Drawing on RIT/NTID faculty expertise in deafness, access technologies, computers, engineering, software development and business, and on collaboration with appropriate individuals at CISCO, this specialized knowledge will be used to eventually evaluate CISCO technologies for potential enhancement and adaptation for the unique deaf consumer.

RIT offers a unique setting for research of this nature, as over 1600 deaf students study and live on a campus with over 16,000 hearing students. Additionally, expertise from within the other seven colleges of RIT (including Applied Science and Technology, Business, Computing and Information Sciences, Engineering, and Science) will augment expertise in the field of deafness and access technologies of NTID to consider how existing and emerging CISCO technologies can ultimately be better deployed to enhance the communication situation for deaf and hard-of-hearing persons.

Mohammad Tehranipoor, University of Connecticut
Effective Reliability and Variability Analysis of Sub-45nm Designs for Improving Yield and Product Quality

As technology scales to 45nm and below, reliability and variability are becoming the dominant design and test challenges for the semiconductor and EDA industry. Current timing analysis methodologies cannot effectively address reliability and variability since they use foundry data as the basis for selecting critical paths, which may not accurately represent process parameters for different designs with different structures. It is vitally important to be able to extract variations at gates and interconnects placed at various metal layers, as well as to extract aging data from silicon. Such extraction can be done on a selected number of dies from various wafers and lots. This information can be used to identify critical reliability and variability paths at time zero, when the design is being timing-closed and tested. These variations will also be used for statistical reliability analysis. New design strategies and effective guardbanding will be developed to improve device performance, yield at production test, self-healing and calibration, reliability in the field, and product quality as technology scales. Our main objectives in this project are: (1) sensitivity extraction and aging data collection using low-cost and accurate on-chip sensors, (2) identification of critical paths to test the performance of the chip during production test (time zero), considering process variations, and (3) identification of paths that will become critical at time 't' in the field, to enable accurate guardbanding and testing during manufacturing test and in the field using low-cost on-line test structures. This project will also help improve performance verification, Fmax analysis, and model quality.

Henning Schulzrinne, Columbia University
Mobile Disruption-Tolerant Networks

With the growing number of mobile devices equipped with wireless interfaces, mobile users increasingly find themselves in different types of networking environments. These networking environments, ranging from globally connected networks such as cellular networks or the Internet to the entirely disconnected networks of stand-alone mobile devices, impose different forms of connectivity. We investigate store-carry-forward networks that rely on user mobility to bridge gaps in wireless coverage ("disruption-tolerant networks"). The performance of routing protocols and mobile applications in such networks depends on the mobility behavior of human users. Using real-world GPS traces, we plan to investigate models that allow us to predict the future movement of data carriers, thus helping to greatly improve the performance of disruption tolerant networks.  The results are also likely to be useful for planning WiMax and 3G/4G networks.

Xi Li, Embedded System Lab, Suzhou Institution For Advanced Study of University of Science and Technology of China
Malware Detection for Smartphones

The goal of our project is to implement a malware detection system for smartphones. The system should detect malware accurately and efficiently while using few resources, to fit the resource limitations of smartphones. Our research focuses on the design and implementation of a malware detection system, which includes: 1) the security mechanisms and risks of specific smartphone platforms; 2) the behavioral features of malware and normal software; 3) malware detection algorithms; and 4) solutions to the resource limitations of smartphones.

Ben Zhao, University of California, Santa Barbara
Understanding and Using the Human Network to Improve the Internet Experience: An Accurate Model for Degree Distributions in Social Networks

With the availability of empirical data from online social networks, researchers are finally able to validate models of social behavior using data from real OSN systems. We find that the widely accepted and relied-upon power-law model provides a poor fit for degree distributions. More importantly, this poor fit results in a severe over-estimate of the presence of supernodes in the network, and provides an erroneous basis for researchers designing heuristics for NP-hard problems on large graphs. We propose a new statistical model that provides a significantly improved fit for social graphs. We will derive both statistical properties and a generative method for this model, and will use data from our study of over 10 million Facebook profiles to validate our approach. Finally, we will revisit and revise solutions to key problems whose heuristics were based on faulty assumptions about supernodes in the graph, including influence maximization, shortest path computation, and community detection.

Constantine Dovrolis, Georgia Institute of Technology, College of Computing
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation

Today, most Over-The-Top (OTT) video streaming services use HTTP/TCP. TCP, however, with its flow and congestion control mechanisms as well as its strict reliability requirement, is far from ideal for video streaming. On the other hand, adaptive video streaming over UDP has several desirable features: the ability to control the transmission rate over small timescales (without the constraints of window-based flow or congestion control), selective (or optional) retransmissions, additional protection for certain frames using Forward Error Correction (FEC), and network measurements using video packets. The main premise of the proposed research is that all these desirable features of UDP-based video streaming can also be gained using TCP, as long as the video streaming application deploys certain mechanisms that we will design, prototype and evaluate in this research.

Robert Daasch, Portland State University
Semiconductor Device Quality and Reliability

This proposal seeks funding support to improve the manufacture of Cisco products.  The proposal explores strategies to collect and analyze the test data from divergent sources (e.g. ASIC suppliers, contract manufacturers and Cisco). The results of the work are new ways to direct the test flow of application specific integrated circuits (ASICs), reduce the frequency of costly chip failures that appear late in the manufacturing flow and enhance the coordination between Cisco and its suppliers.  The proposed strategy reintegrates the data from component manufacturer wafer-sort and final-test to end-use system diagnostic tests (SDTs) into a Virtual Integrated Device Manufacturing (VIDM) environment.   The proposal presents new concepts such as the golden rejects as standards to be leveraged by the VIDM to control the integrated circuit manufacturing process.

Kirill Levchenko, University of California, San Diego
Identifying Malicious URLs Using Machine Learning

We propose to identify malicious URLs based entirely on the URL alone. The motivation is to provide inherently better coverage than blacklisting-based approaches (e.g., correctly predicting the status of new sites) while avoiding the client-side overhead and risk of approaches that analyze Web content on demand. We intend to apply machine learning techniques to classify sites based only on their lexical and domain-name features. The major advantage of this approach is that, in addition to being safer for the user, classifying a URL with a trained model is a lightweight operation compared to first downloading the page and then using its contents for classification. In addition to evaluating this approach, we also intend to tackle issues of practical deployment, namely scalability and the complexity of updating the features.
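
A minimal sketch of the kind of lexical classifier intended is shown below, assuming scikit-learn is available; the toy training URLs, feature choice and model are placeholders rather than the project's actual design.

    # Minimal sketch: classify URLs as malicious or benign from lexical
    # features only (character n-grams of the URL string).  Training data,
    # features and model choice are illustrative placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_urls = [
        "http://paypal.com.secure-login.example.ru/verify",
        "http://free-prizes.example.cn/win.php?id=123",
        "http://www.wikipedia.org/wiki/Internet",
        "http://www.cs.ucsd.edu/research/",
    ]
    train_labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),  # lexical features
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_urls, train_labels)

    # Classifying a new URL is lightweight: no page download is needed.
    print(model.predict(["http://account-update.example.biz/login"]))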

Steve Renals, The University of Edinburgh
Robust Acoustic Segmentation for Conversational Speech Recognition

Robustness across different acoustic environments and different task domains is a key research challenge for large vocabulary conversational speech recognition.  Within the AMI and AMIDA projects we have developed a speech recognition system (AMI-ASR), primarily targeted at multiparty meetings, which has demonstrated state-of-the-art accuracy and close to real-time performance.  We have recently tested this system on a variety of speech recordings (including podcasts, recorded lectures and broadcast TV programmes).  The results indicated that the core acoustic and language models remain accurate across these different environments and domains (although obviously domain specific training/adaptation data helps).  However, a large number of the observed errors come from problems in the acoustic segmentation module.  These errors are primarily caused by segments of non-speech audio being decoded as speech by the ASR system.

In this project we plan to develop acoustic segmentation modules that are robust across domains and across acoustic environments, and which can also operate with low latency.  Our approach will be to develop methods that combine multiple classifiers and acoustic features.

During the project we will experiment on a variety of data including meeting recordings from a variety of sources, podcasts and lectures. The final evaluation of the system will be in terms of ASR error rate, and will use the project development data as well as any "surprise" data that Cisco may wish to provide.

Xuehai Zhou, Embedded System Lab. , Suzhou Institution For Advanced Study of University of Science and Technology of China
Mobile Cloud Computing

Along with the rapid development of smartphones and wireless networks, we believe more and more services will emerge on mobile devices. However, providing these services through the traditional service infrastructure is often difficult, or even impossible. In this respect, adapting the current cloud service platform to mobile devices is one solution to this problem. This project aims to find the difficulties that current cloud platforms face with mobile services, evaluate these problems, and design new structures or algorithms to solve them. We will finally design a system framework, a new mobile service infrastructure, that adds the functionality needed to deliver or enable new services and applications on mobile devices.

Sujit Dey, University of California, San Diego
Adaptive Wireless Multimedia Cloud Computing to Enable Rich Internet Gaming on Mobile Devices

With the evolution of smart mobile devices, and their capability and rapid adoption for Internet access, there is a growing desire to enable rich multimedia applications on these mobile devices, like 3D gaming and augmented reality, internet and social networking video, and enterprise applications like tele-medicine and tele-education. However, despite the progress in the capabilities of mobile devices, there will be a widening gap with the growing computing/power requirements of emerging Internet multimedia applications, like multi-player Internet gaming. We propose a Wireless Multimedia Cloud Computing approach, in which the computing needs of the rich multimedia applications will be performed not by the mobile devices, but by multimedia computing platforms embedded in the wireless network. We will address wireless network scalability and Quality of Experience achieved, besides computing scalability and energy efficiency, to make the proposed approach user acceptable and economically viable. Specifically, we will have to consider the challenges of the wireless network, like bandwidth fluctuations, latency, and packet loss, while satisfying stringent QoS requirements of real-time and interactive multimedia applications, like very low response times needed in multi-player gaming. Besides, we will have to ensure very high scalability in terms of the number of mobile users that can be supported simultaneously, which can be challenging considering the high computation needs multimedia tasks like 3D rendering can impose on the wireless cloud computing resources, and the high bandwidth requirements of delivering multimedia data from the wireless cloud servers to the mobile devices. To address the above challenges, we propose a new Wireless Cloud Scheduler, that will supplement current wireless network scheduling by simultaneously considering both the computing and network bandwidth requirements of the multimedia applications, and the current availabilities of the cloud computing resources and conditions of the wireless network. To ensure high mobile user experience, we will analyze how wireless network and cloud computing factors affect user experience, and develop a model to quantitatively measure the quality of experience in Wireless Multimedia Cloud Computing. We will develop adaptation techniques for multimedia applications like 3D rendering, augmented reality, and video encoding, which can enable up to 10 times reduction in the computing and network bandwidth requirements of the multimedia applications, while still ensuring acceptable mobile user experience. The resulting adaptive applications residing in the wireless cloud, and the user experience models, will be utilized by the wireless cloud scheduler to ensure high network and computing scalability, while enabling high mobile user experience. While the above approaches can enable a range of multimedia applications on mobile devices, in this proposal we will particularly focus on Cloud Mobile Gaming (CMG): enabling rich, multi-player Internet games on mobile devices. We will develop a research prototype of the CMG architecture, including a mobile gaming user experience measurement tool, application adaptation techniques, and a wireless cloud scheduler for mobile gaming. We will demonstrate the feasibility and scalability of the CMG approach by playing representative Internet games on mobile devices and netbooks on WiFi, 3G, and 4G wireless networks, with enhanced gaming user experience.

Li Xiong, Emory University
Enabling Privacy for Data Federation Services

There is an increasing need for organizations to build collaborative information networks using data federation services across distributed, autonomous, and possibly private databases containing personal information. Two important privacy and security constraints need to be considered: 1) the privacy of individuals or data subjects who need to protect their sensitive information, and 2) the security of data providers who need to protect their data or data ownership. A fundamental research question is: what models correctly capture the privacy risk in the data federation setting? Simply combining the privacy models of single-provider settings with the security notions of secure multi-party computation is not sufficient to address the potential privacy risks of data federations. In addition, few works have taken a systems approach to developing a privacy-enabled data federation infrastructure.

The objective of this proposal is to develop a privacy-preserving data federation framework that includes:
1) rigorous and practical privacy notions and mechanisms for privacy protection of data across multiple data providers,
2) scalable and extensible architecture and middleware that can be easily deployed for privacy-preserving data federations.

The proposed research will reexamine the state-of-the-art privacy notions, based on both adversarial privacy and differential privacy, in the distributed setting, and will adapt and design new models and efficient algorithms to address the potential risks. The research will build upon the PI's previous work on a next-generation distributed mediator-based data federation architecture and secure query processing protocols. The intent is to demonstrate the viability of integrating state-of-the-art privacy protection and secure computation technology components into an advanced and extensible infrastructure framework for seamless privacy-preserving data federation.
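
For context, the Python sketch below shows the standard Laplace mechanism for a differentially private count, one of the basic building blocks such a framework could adapt to the multi-provider setting; the toy records and epsilon value are illustrative.

    # Standard Laplace mechanism for an epsilon-differentially private count.
    # The toy records and epsilon value are illustrative only.
    import numpy as np

    def dp_count(records, predicate, epsilon):
        """Noisy count: a counting query has sensitivity 1, so scale = 1/epsilon."""
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    patients = [{"age": 34, "diagnosis": "flu"},
                {"age": 61, "diagnosis": "asthma"},
                {"age": 47, "diagnosis": "flu"}]
    print(dp_count(patients, lambda r: r["diagnosis"] == "flu", epsilon=0.5))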

The PI and her students have unique qualifications with expertise and research experience in both data privacy and security and distributed computing to successfully complete the project.   

Riccardo Sisto, Politecnico di Torino
Fast and memory-efficient regular expression matching

Regular expression matching is known to be a crucial task in many packet processing applications. Especially when performing deep packet inspection, the need arises to match increasingly complex expressions in real time. Intrusion detection systems and e-mail spam filters are among the most significant examples of applications that rely heavily on regex matching of considerable complexity.

Thanks to the efficiency of the Deterministic Finite Automata (DFA) model in terms of execution time on purely sequential machines and limited usage of bandwidth toward main memory, all the most recent software approaches for regex matching focus on representations derived from DFA, although enriched with different techniques in order to limit the well-known state explosion problem of DFA in case of complex patterns.

In this project we propose an orthogonal approach based on Non-deterministic Finite Automata (NFAs), which are well known to behave well in terms of memory occupancy even with very complex pattern sets.

The main drawback of directly using the NFA form for regex matching lies in the need to explore multiple paths in parallel in order to guarantee adequate processing speed. The leading idea of this project is that processing architectures with a high degree of parallelism and very high memory bandwidth can be exploited to efficiently match regexes directly on their NFA representation, thus avoiding memory explosion.
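
A compact way to see why NFA matching maps naturally onto parallel hardware is the classic simulation that advances the whole set of active states on each input symbol; the Python sketch below does this for a tiny hand-built NFA whose transition table (for the pattern "ab*c") is constructed by hand for illustration.

    # Classic NFA simulation: instead of backtracking, keep the set of all
    # active states and advance them together for each input symbol.  On
    # highly parallel hardware, each active state can be advanced concurrently.

    nfa = {
        # state: {input symbol: set of next states}  (hand-built for "ab*c")
        0: {"a": {1}},
        1: {"b": {1}, "c": {2}},
        2: {},  # accepting state
    }
    ACCEPTING = {2}
    START = {0}

    def matches(text):
        active = set(START)
        for ch in text:
            # Advance every active state in parallel on the same input symbol.
            active = {nxt for s in active for nxt in nfa[s].get(ch, set())}
            if not active:  # no live paths left
                return False
        return bool(active & ACCEPTING)

    print(matches("abbbc"))  # True
    print(matches("ac"))     # True
    print(matches("abd"))    # False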

This project aims to validate this idea by implementing an NFA-based pattern matching software engine on processors characterized by a high degree of parallelism and abundant memory bandwidth, such as modern Graphics Processing Units or other massively multicore architectures with similar characteristics.

Finally, a mainstream architecture such as the Intel x86 will also be evaluated, by investigating whether its special vector-based instruction set (e.g. SSE) is suitable for executing NFA at high speed on commodity hardware.

Joy Zhang, Carnegie Mellon University
Behavior Modeling for Human Network

The future usage of mobile devices and services will be highly personalized. The advancement of sensor technology makes context-aware and people-centric computing more promising than ever before. Users will incorporate these new technologies into their daily lives, and the way they use new devices and services will reflect their personality and lifestyle. This opportunity opens the door for novel paradigms such as behavior-aware protocols, services and applications. Such applications and services will look into user behavior and leverage the underlying patterns in user activity to adapt their operation, and have the potential to work more efficiently and suit the real needs of users. While recording all mobile sensor information is trivial, effectively understanding, inferring and predicting users' behavior from the heterogeneous sensory data still poses challenges for researchers and industry professionals.

We propose a novel approach to the study of human behavior. By converting the heterogeneous sensory data into a one-dimensional behavior-language representation, we apply statistical natural language processing algorithms to index, retrieve, cluster, summarize and infer high-level meanings from the recorded multi-sensor information about users' behavior.
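
As a toy illustration of the behavior-language idea, the Python sketch below discretizes multi-sensor samples into tokens and computes ordinary n-gram statistics over the resulting sequence; the token scheme and sample fields are invented.

    # Toy "behavior language": discretize heterogeneous sensor readings into
    # tokens, then apply ordinary n-gram statistics to the resulting
    # one-dimensional sequence.  The token scheme is invented.
    from collections import Counter

    def tokenize(sample):
        """Map one multi-sensor sample to a single behavior token."""
        place = sample["place"]  # e.g., obtained by clustering GPS fixes
        motion = "moving" if sample["speed_mps"] > 0.5 else "still"
        period = "morning" if sample["hour"] < 12 else "evening"
        return f"{place}:{motion}:{period}"

    def ngram_counts(tokens, n=2):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    day = [
        {"place": "home",   "speed_mps": 0.0, "hour": 8},
        {"place": "street", "speed_mps": 1.4, "hour": 9},
        {"place": "office", "speed_mps": 0.1, "hour": 10},
        {"place": "office", "speed_mps": 0.0, "hour": 15},
        {"place": "home",   "speed_mps": 0.0, "hour": 20},
    ]
    tokens = [tokenize(s) for s in day]
    print(ngram_counts(tokens, n=2).most_common(3))  # frequent behavior patterns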

We will also investigate using social networks to infer the social behavior of certain user groups to tailor services for these groups. This research will advance our understanding of human behavior and will provide a framework that can be used to guide both future research and engineering in human centered computing.

Ramesh Govindan, University of Southern California
Towards Predictive Control for Grid-Friendly Smart Buildings

Commercial and residential buildings consume the largest portion of electricity generated in the U.S. Supply-side methods to deal with increasing peak loads require the addition of new power plants, and lead to reduced efficiency at off-peak hours and higher energy costs. A promising alternative approach to managing peak electricity consumption is demand response and demand-side management, which can attenuate the peak load on the electricity grid and power generation. This can slow the rate of required infrastructure investment, improve efficiency and keep costs manageable. Therefore, there is great interest in developing demand response control systems to reduce the peak relative to the base load. This proposal seeks to explore the interplay between two technologies in order to achieve efficient demand-side management: spatially dense, near real-time environmental measurements provided by a wireless sensor network, and a model-based predictive control (MPC) system that uses a building model derived from these measurements.
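
To make the control loop concrete, the following much-simplified Python sketch shows a one-zone model predictive controller that chooses, by brute force over a short horizon, the heating schedule minimizing an energy-plus-discomfort cost; the thermal model, weights and forecast are invented placeholders.

    # Much-simplified one-zone MPC sketch: pick, by brute force over a short
    # horizon, the on/off heating schedule minimizing energy cost plus a
    # discomfort penalty.  The thermal model and weights are invented.
    from itertools import product

    def predict(temp, heat_on, outdoor):
        """Toy linear thermal model for one time step."""
        return temp + 0.1 * (outdoor - temp) + (1.5 if heat_on else 0.0)

    def plan(temp, outdoor_forecast, setpoint=21.0, energy_weight=0.3):
        horizon = len(outdoor_forecast)
        best_cost, best_schedule = float("inf"), None
        for schedule in product([0, 1], repeat=horizon):
            t, cost = temp, 0.0
            for heat_on, outdoor in zip(schedule, outdoor_forecast):
                t = predict(t, heat_on, outdoor)
                cost += energy_weight * heat_on + (t - setpoint) ** 2
            if cost < best_cost:
                best_cost, best_schedule = cost, schedule
        return best_schedule[0]  # apply only the first action, then re-plan

    # Sensor-network measurements would supply the current zone temperature
    # and help calibrate predict(); the forecast values here are illustrative.
    print(plan(temp=18.0, outdoor_forecast=[5.0, 6.0, 7.0, 8.0]))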

Edward Knightly, William Marsh Rice University
Robust High-Performance Enterprise Wireless: Inference, Management, and Cooperation


Our completed project yielded three key outcomes: (i) expanded deployment of the TFA Rice Wireless programmable urban WiFi network to a 3 km2 area and a user population exceeding 4,000, (ii) an analysis and solution to the multi-hop wireless routing problem, and (iii) an analysis and solution to the problem of congestion control over imbalanced wireless channels. We propose the following project for the next research phase. First, we will develop a network inference and management toolkit to address network diagnostics and to answer "what-if" queries. To achieve this, we will correlate easy-to-collect average statistics at each node into an inference framework that can disentangle complex interactions among nodes. In particular, we will use on-line measurements to assess how topology, channel conditions, and traffic demands have yielded the current network state and use a combination of on-line measurements, protocol rules, and analytical models to develop highly realistic predictions of how to move the network from an under-performing network state to one that meets the network manager's performance objectives. Second, we will design and implement a MAC protocol that enables multi-node cooperation to form a robust "virtual MIMO" array. We have already demonstrated in-lab that two nodes (such as a user's laptop and WiFi phone) can employ symbol-level cooperation to obtain a dramatic increase in channel robustness. We will explore protocol design for dense enterprise deployments and develop techniques to automatically ensure high robustness and resilience to fading and interference for management traffic and priority traffic. Throughout this project, we will employ an experimental research methodology utilizing both our in-building custom FPGA platform and our outdoor urban WiFi deployment.

Bernd Girod, Stanford University
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation


Typical IPTV solutions today combine multicast with local retransmissions of IP packets from a Retransmission Server (RS). The RS also supports fast channel change through unicast bursts that provide rapid synchronization of key information and supply an initial video stream. Building upon our ongoing collaboration with Cisco in this area, we propose to investigate advanced video transcoding schemes for fast channel change and error repair. We are particularly interested in the optimal placement of random access points in the transcoded bitstream and multicast-stream-aware transcoding that minimizes quality degradations from splicing of unicast and multicast streams.

George Porter, University of California, San Diego
Efficient Data-intensive Scalable Computing

Data-intensive Scalable Computing (DISC) is an increasingly important approach to solving web-scale problems faced by computing systems that answer queries over hundreds of TB of input data.  The purpose of these systems can be broad-ranging, from biological applications that look for subsequence matches within long DNA strands to scientific applications for radio astronomers that process deep-space radio telescope data.  The key to achieving this scale of computing is harnessing a large number of individual commodity nodes.  Two key unaddressed challenges in building and deploying DISC systems at massive scale are per-node efficiency and cross-resource balance as the system scales.  While DISC systems like Hadoop scale to hundreds or thousands of nodes, they typically do so by sacrificing CPU, storage, and especially networking efficiency, frequently resulting in unbalanced systems in which some resources are largely unused (such as the network).  The goal of this proposal is to design DISC systems that treat efficiency as a first-class requirement.  We seek to develop programming models and tools to build balanced systems in which the key resources of networking, computing, and storage are maximally utilized. Since real systems will exhibit heterogeneity across nodes, we will explore techniques to flexibly deploy parts of the application accordingly.  Lastly, we will provide fault tolerance while maintaining efficient use of computing and storage resources.  This work will initially be set in the context of building TritonSort, a staged, pipeline-driven dataflow processing system designed to efficiently sort the vast quantities of data present in the GraySort benchmark challenge.
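
As a toy illustration of the staged sort-then-merge dataflow that such systems pipeline at scale (this is not the TritonSort implementation, which distributes these stages across many nodes and tunes them for disk and network balance), the following Python sketch spills sorted in-memory runs to temporary files and then stream-merges them.

# Minimal external sort sketch: sort bounded-size runs, then stream-merge them.
# This illustrates the staged sort/merge dataflow only; it is not TritonSort.
import heapq
import tempfile

def sort_runs(records, run_size):
    """Stage 1: cut the input into runs that fit in memory, sort, spill to disk."""
    run_files = []
    for i in range(0, len(records), run_size):
        run = sorted(records[i:i + run_size])
        f = tempfile.TemporaryFile(mode="w+")
        f.writelines(line + "\n" for line in run)
        f.seek(0)
        run_files.append(f)
    return run_files

def merge_runs(run_files):
    """Stage 2: k-way merge of the sorted runs, streaming one record at a time."""
    streams = ((line.rstrip("\n") for line in f) for f in run_files)
    return list(heapq.merge(*streams))

records = ["delta", "alpha", "echo", "charlie", "bravo", "foxtrot"]
print(merge_runs(sort_runs(records, run_size=2)))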

Faramarz Fekri, Georgia Tech
Robust Video Streaming over IP: Adaptive Transport and Its Instrumentation

The objective of this research is the adaptive streaming of ubiquitous video to consumers with access to a variety of platforms, including the television, PC, and mobile handset, over network paths with diverse characteristics, with the goal of providing a high-quality, seamless experience. With the emergence of multimedia applications, demand for live and on-demand video services and real-time multi-point applications over IP has increased. Viewers access the video from various devices and networks with varying conditions. Since multicasting, in general, requires the commercial provider's support, we have to adopt unicast on the access network. However, transferring large volumes of live content (TV) via redundant unicast transmission of the same content to multiple receivers is inefficient. One alternative is to use multicast inside the core but unicast on the access network. This proposal presents an adaptive scheme for layered video transport to viewers with the flexibility to access the stream from various devices and heterogeneous networks. Our solution applies modern elastic FEC to H.264/SVC at the source, enabling partial transport of video layers in a new architecture involving distributed delivery nodes (caches). The use of caching enables scaling the system when multicasting to a large number of viewers is not an option. This new architecture for live content transport is inspired by a non-trivial advantage in using the scalable video stream in conjunction with elastic FEC in a distributed cache environment. It has the potential to address the challenging problem of video streaming with a solution that is easy to implement and deploy, adapts to the specific needs of heterogeneous viewers, handles packet loss and congestion, accommodates both live content transport and on-demand video unicast, and, especially, requires less bandwidth for the same quality of service in live content delivery to large populations. The new approach offers other benefits, such as robustness, fairness, bandwidth savings, load balancing, and coexistence with inelastic flows, that we are interested in investigating further.

Gerald Friedland, International Computer Science Institute
Multimodal Speaker Diarization

Speaker diarization is defined as the task of automatically determining "who spoke when", given an audio file and no prior knowledge of any kind other than the binary format of the sound file. The problem has been investigated extensively in the speech community, and NIST has conducted evaluations to measure the accuracy of diarization for the purpose of Rich Transcription, i.e., speaker diarization as a front-end to ASR. Recent results, however, have shown that speaker diarization is useful for many other applications, especially video navigation. Current speaker diarization systems, however, do not seem robust enough to be used for the navigation of arbitrary video content, as they require intensive domain-specific parameter tuning. Traditionally, speaker diarization has been investigated only using methods from speech processing, and visual cues have rarely been taken into account. Initial results indicate that adding the visual domain can increase the accuracy and robustness of speaker diarization systems.

We therefore propose to conduct systematic research on how audiovisual methods may improve the accuracy and robustness of speaker diarization systems.
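
As one illustrative building block of a conventional audio-only back-end that such audiovisual work would extend (not the proposed multimodal system), the Python sketch below applies the standard BIC criterion to decide whether two segment models should be merged; the 2-D synthetic features stand in for real acoustic features.

# Minimal sketch of BIC-based agglomerative speaker clustering on audio-only
# feature vectors. The synthetic 2-D "features" below are stand-ins for real
# acoustic features; the proposed work would add visual cues to such a pipeline.
import numpy as np
rng = np.random.default_rng(0)

def delta_bic(x, y, lam=1.0):
    """Delta-BIC for merging two single-Gaussian clusters (negative favours merging)."""
    z = np.vstack([x, y])
    d = z.shape[1]
    def nlogdet(a):
        return len(a) * np.linalg.slogdet(np.cov(a, rowvar=False))[1]
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(len(z))
    return 0.5 * (nlogdet(z) - nlogdet(x) - nlogdet(y)) - penalty

# Three initial segments: two drawn from "speaker A", one from "speaker B".
seg1 = rng.normal([0, 0], 1.0, size=(200, 2))
seg2 = rng.normal([0, 0], 1.0, size=(200, 2))
seg3 = rng.normal([5, 5], 1.0, size=(200, 2))

print("merge seg1+seg2?", delta_bic(seg1, seg2) < 0)   # same speaker  -> True
print("merge seg1+seg3?", delta_bic(seg1, seg3) < 0)   # different     -> False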

Alan Bovik, University of Texas
Objective Measurement of Stereoscopic Visual Quality

There is a recent and ongoing explosion in 3D stereoscopic displays including cinema, TV, gaming, and mobile video. The ability to estimate the perceptual quality of the delivered 3D stereoscopic experience is quite important to ensure high Quality-of-Service (QoS). Towards this end we propose to develop principled metrics for 3D stereoscopic image quality, create quantitative models of the effects of distortions on the perception of 3D image quality, and develop Stereo Image Quality Assessment (IQA) algorithms that can accurately predict the subjective experience of stereoscopic visual quality. To achieve this we will conduct psychometric studies on a recently constructed database of high-resolution stereo images and distorted versions of these images. We have precision lidar ground truth for these stereo pairs. Based on these studies, which will reveal perceptual masking principles involving depth, we will develop methods for estimating the degradation in quality experienced when viewing distorted stereoscopic images. By combining quantitative 3D perception models with new statistical models of naturalistic 3D scenes that we have discovered under an NSF grant, including the bivariate statistics of luminance and depth in real-world 3D scenes, we shall create a new breed of perceptually relevant, statistically robust Stereo IQA algorithms. We envision that our Stereo IQA algorithms will achieve a much higher degree of predictive capability than any existing methods. More importantly, we will develop Stereo IQA algorithms that are blind or No Reference, which, to date, remains a completely unsolved problem.

Xavi Masip, Universitat Politecnica de Catalunya
Sponsorship for the 9th International Conference on Wired/Wireless Internet Communications (WWIC'11)

The 9th International Conference on Wired/Wireless Internet Communications (WWIC 2011) will be organized by the Technical University of Catalonia (UPC) in Barcelona, Spain, in June 2011.

The conference is traditionally sponsored by Ericsson and Nokia, and the 2006 edition was sponsored by Cisco as well.

Cisco is participating in the conference by serving on the TPC and providing a keynote speaker.

As a WWIC patron, Cisco will benefit from wide visibility in conference fliers, the CFP, news items, and the conference proceedings, as well as from two free registrations and a keynote talk on an open topic.

Robert Beverly, Naval Postgraduate School
Transport-Layer Identification of Botnets and Malicious Traffic

Botnets, collections of compromised hosts operating under common control, typically for nefarious purposes, are a scourge. Modern botnets are increasingly sophisticated and economically and politically motivated. Despite commercial defenses and research efforts, botnets continue to impart both direct and indirect damage upon users, service providers, and the Internet at large.

We propose a unique approach to detecting and mitigating botnet activity within the network core via transport-level (e.g., TCP) traffic signal analysis. Our key insight is that local botnet behavior manifests remotely as a discriminative signal. Rather than examining easily forged and abundant IP addresses, or attempting to hone in on command-and-control or content signatures, our proof-of-concept technique takes a path distinct from current practice and research. Using statistical signal characterization methods, we believe we can exploit botnets' basic requirement to source large amounts of data, be it attacks, spam, or other malicious traffic. The resulting traffic signal provides a difficult-to-subvert discriminator.
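
As a toy illustration of the general flavor of this approach (the features, synthetic flows, and decision rule below are assumptions for exposition, not the prototype's actual discriminators), the following Python sketch summarizes per-flow transport statistics and applies a nearest-centroid decision.

# Toy sketch: classify flows as botnet-like or benign from transport-level
# summary features. The features and synthetic samples are illustrative only.
import random
random.seed(0)

def flow_features(rtt_samples, retransmits, pkts_sent):
    """Summarize one flow: mean RTT, RTT jitter, retransmission rate."""
    mean_rtt = sum(rtt_samples) / len(rtt_samples)
    jitter = (sum((r - mean_rtt) ** 2 for r in rtt_samples) / len(rtt_samples)) ** 0.5
    return [mean_rtt, jitter, retransmits / pkts_sent]

def make_flow(botnet):
    """Synthetic flows: assume bots saturating their uplink show higher RTT, jitter, loss."""
    base = 0.20 if botnet else 0.05
    rtts = [random.gauss(base, base / 3) for _ in range(20)]
    retx = random.randint(5, 15) if botnet else random.randint(0, 2)
    return flow_features(rtts, retx, pkts_sent=100), int(botnet)

train = [make_flow(botnet=i % 2 == 0) for i in range(200)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

good = centroid([x for x, y in train if y == 0])
bad = centroid([x for x, y in train if y == 1])

def classify(feat):
    """Nearest-centroid decision in feature space."""
    d = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return int(d(feat, bad) < d(feat, good))

sample, label = make_flow(botnet=True)
print("predicted", classify(sample), "actual", label)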

Our initial prototype [3] validates this framework as applied to a common botnet activity: abusive electronic mail. Abusive email imposes a significant burden on both providers [2] and users. Not only did this prototype yield highly promising results without reputation (e.g., blacklists, clustering) or content (e.g., Bayesian filters) analysis, it has several important benefits. First, because our technique is IP address and content agnostic, it is privacy-preserving and therefore may be run in the network core. Second, in-core operation can stanch malicious traffic before it saturates access links. Third, by exploiting the root signature of the attacks, the traffic itself, botnet operators must either acquire more nodes or send traffic more slowly, either of which imposes an economic cost on the botnet.

The fundamental technical approach of our prototype, treating the traffic signal as a lowest-level primitive, without relying on brittle heuristics or reputation measures, has the potential to generalize to many security contexts. We propose to investigate using transport-traffic analysis in new contexts and in conjunction with additional data to perform multi-layer correlation. Of particular interest is the dual of the abusive traffic problem, identifying the scam and botnet hosting infrastructure to avert phishing and malware propagation attacks. Longer-term, we believe the method can benefit service providers by detecting and reacting automatically to third-party congestion experienced by their customers, whether induced by faults, overloads, or attacks.

Molly O'Neal, Georgia Tech
GTRNOC Convergence Innovation Competition

The Convergence Innovation Competition (CIC) is an annual event produced by the GT-RNOC each spring. This unique competition is open to all Georgia Tech students and focuses on innovation in the areas of Converged Services, Converged Media, Converged Networks, and Converged Client and Server platforms and environments. This includes the convergence of services (e.g., maps + blog + phone), service delivery platforms (e.g., Internet + IMS), access technologies (e.g., 3G + WiFi + IPTV), and client platforms (e.g., smartphone + set-top box + laptop), as well as smart grids, smart homes, and connected vehicles. Participants are supported by GT-RNOC research assistants who are domain experts in a variety of relevant areas, and are given access to a wide variety of resources contained in the Convergence Innovation Platform.  The goal of this competition is to develop innovative convergence applications. While the primary focus is on working end-to-end prototypes, the applications and services developed also need to be commercially viable, with a strong emphasis on the user experience.

Ananthanarayanan Chockalingam, Indian Institute of Science, Bangalore
Transmitter/Receiver Techniques for Large-MIMO Systems

This research proposal is aimed at investigating novel architectures and algorithms for large-MIMO encoding (at the transmitter) and large-MIMO detection and channel estimation (at the receiver) at practically affordable low complexities. The approach we propose to pursue for the large-MIMO detection and channel estimation problem is to seek tools and algorithms rooted in machine learning/artificial intelligence (ML/AI). The motivation for this approach is that, interestingly, certain algorithms rooted in ML/AI (e.g., greedy local search, belief propagation) approach optimum performance increasingly closely as the number of dimensions grows, while retaining low complexity. The proposed research will investigate such algorithms for suitability and adoption in large-MIMO detection and channel estimation. In addition, large-MIMO encoding techniques with no channel state information at the transmitter (CSIT) and with partial/limited CSIT that keep receiver complexity low will be investigated. The approach we propose for large-MIMO encoding at the transmitter is to design space-time block codes (STBCs) that are amenable to the detection techniques discussed above and/or STBCs that admit fast sphere decoding. Wireless mesh and WiFi networks (e.g., IEEE 802.11ac), where Cisco is a key player, can benefit significantly from the high spectral efficiency of the large-MIMO approach.

Yao Wang, Polytechnic Institute of New York University
Video Coding/Adaptation and Unequal Error Protection Considering Joint Impact of Temporal and Amplitude Resolution

Both wired and wireless networks are highly dynamic, with time- and location-varying bandwidth and reliability. A promising solution to tackle such variability is scalable video coding (SVC), so that the video rate can be adapted to the sustainable bandwidth and different video layers can be protected against packet loss according to their visual importance. A scalable video is typically coded into many spatial, temporal, and amplitude (STA) layers. A critical issue in SVC adaptation is how to determine which layers to deliver in order to maximize the perceptual quality, subject to the total rate constraint at the receiver. A related problem is how to allocate error protection bits over the chosen layers. To solve such problems, one must be able to predict the perceptual quality and bit rate of a video under any combination of spatial, temporal, and amplitude resolutions (STAR). (Note that each particular set of STA layers corresponds to a specific combination of STAR.) Although there has been much work in perceptual quality modeling of compressed video, most of it considered video coded at fixed spatial and temporal resolutions. Similarly, prior work on rate modeling and rate control has mostly focused on the impact of the quantization stepsize (QS) only. Most of the recent work in SVC adaptation involving frame rate and frame size is quite ad hoc, not based on analytical quality and rate models. Recently we have developed accurate yet relatively simple models relating the perceptual quality and rate, respectively, to the frame rate and QS. We are currently extending these models to consider spatial resolution as well. We have further considered how to determine the optimal frame rate and QS to maximize the perceptual quality subject to the rate constraint for a single video. The same principle can be employed to optimize either real-time video encoding or adaptation of pre-coded SVC video, based on feedback about the sustainable end-to-end bandwidth.  We propose to collaborate with Cisco researchers in the following related research areas: 1) rate allocation and video adaptation among multiple videos to avoid congestion at bottleneck links in the Internet, 2) unequal error protection across STA layers to optimize the received video quality under a total rate constraint, and 3) wireless video multicast with fair and yet differentiated quality of service. All of this work will make use of our quality and rate models to optimize the perceptual quality (at either a single recipient or multiple recipients) subject to a given rate constraint.
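
As a minimal illustration of the adaptation step alone (the parametric quality and rate models below are invented placeholders, not the models developed in this work), the following Python sketch selects the frame rate and QS that maximize modeled quality under a rate budget.

# Sketch of selecting frame rate and quantization stepsize (QS) to maximize a
# modeled perceptual quality under a rate budget. The model forms and constants
# are placeholders for illustration only.
import math
import itertools

FRAME_RATES = [7.5, 15, 30]          # candidate temporal resolutions (fps)
QS_LEVELS = [16, 24, 32, 40]         # candidate quantization stepsizes

def rate_kbps(fps, qs, r_max=2000.0):
    """Assumed rate model: rate grows with frame rate, shrinks with coarser QS."""
    return r_max * (fps / 30.0) ** 0.6 * (16.0 / qs) ** 1.2

def quality(fps, qs):
    """Assumed quality model: penalize low frame rate and coarse quantization."""
    temporal = 1.0 - math.exp(-fps / 10.0)
    amplitude = math.exp(-(qs - 16.0) / 30.0)
    return 100.0 * temporal * amplitude

def best_operating_point(rate_budget_kbps):
    """Exhaustively pick the feasible (quality, fps, qs) with the highest quality."""
    feasible = [(quality(f, q), f, q)
                for f, q in itertools.product(FRAME_RATES, QS_LEVELS)
                if rate_kbps(f, q) <= rate_budget_kbps]
    return max(feasible) if feasible else None

print(best_operating_point(rate_budget_kbps=800))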

Yuanyuan Zhou, University of California, San Diego
Fast Failure Diagnosis and Security Exploit Detection  via Automatic Log Analysis and Enhancement


Complex software systems inevitably have complex failure modes: errors triggered by some combination of latent software bugs, environmental conditions, administrative errors, and even malicious attacks that exploit software vulnerabilities. Very often, log files are the sole data source available for vendors to diagnose reported failures or analyze security exploits. Unfortunately, it is usually difficult to conduct deep inference manually from a massive amount of log messages. Additionally, many log messages in real-world software are ad hoc and imprecise, and provide very limited diagnostic information, still leaving a large amount of uncertainty regarding what may have happened during a failed execution or an attack.

To enable developers to quickly troubleshoot software failures and shorten system downtime, we propose automatic log inference and informative logging to make real-world software more diagnosable with regard to failures and security exploits. Specifically, our research consists of the following thrusts (the first two related to general failure diagnosis and the third to security):

(1) Practical and scalable automatic log inference for troubleshooting production failures. We will investigate techniques and build a tool that combines innovations from program analysis and symbolic execution to analyze run-time logs and source code together to infer what must/may have happened during the failed execution. The inferred information can help software engineers significantly reduce the diagnosis time and quickly release patches to fix problems.

(2) Automatic log enhancement to make software easy to diagnose. We will investigate ideas and design static analysis tools to automatically enhance log messages in software to collect more causally related information for failure diagnosis.  Specifically, we will explore the following two steps to make logging more effective for diagnosis: (i) We will build a tool that can automatically enhance each existing log message in a given piece of software to collect additional causally related information that programs should have captured when writing log messages. Such additional log information can significantly narrow down the number of possible code paths engineers must examine to pinpoint a failure's root cause. In brief, we enhance log content in a very specific fashion, using program analysis to identify which state should be captured at each log point (a logging statement in source code) to minimize causal ambiguity. (ii) We will explore a methodology and build a tool that systematically analyzes logs' code coverage and identifies the best program locations in software code at which to collect intermediate diagnostic information in an informative (maximally removing diagnostic ambiguity) and efficient (low overhead) way.

(3) Log inference and log enhancement for security exploit detection and vulnerability analysis. As many crash failures (e.g., those caused by buffer overflows) can be exploited by malicious users to launch attacks and hijack systems, quickly detecting them and identifying the vulnerable code locations is critical to ensuring security.  Such vulnerabilities usually differ from other general failures: in most cases programmers do not anticipate the crashes, so they would not know to place logging statements at those faulty points. The questions are therefore: (1) how can existing log messages be leveraged to detect security exploits, and (2) how can we do better logging to facilitate intrusion detection and vulnerability analysis?

First, we can apply similar log-code inference to help detect code vulnerabilities in the case of crash failures (e.g., those introduced by buffer overflows) by inferring what code path the execution must have taken and what the key variable values were. Such information can help quickly narrow down the root cause and identify vulnerable code locations. We can also enhance existing log messages or add new log messages to make intrusion detection and forensic analysis easier and faster.

Another interesting idea to explore, unique to detecting security exploits, is to learn the normal log message sequences (either logically, by analyzing source code, or statistically, by learning from test runs), and then, at run time, monitor the stream of log messages to detect anomalies that deviate from the normal behavior.  For example, from program logic, a "return" statement should return to the function caller, so there is a natural order (or sequence) in the log messages. If this return is hijacked, the program execution flow will not follow the natural logical order. Instead, it will jump to a different code location and produce a completely different log message order.
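
As a minimal sketch of this idea (the message identifiers and runs are invented for illustration), the following Python code learns the set of consecutive log-message pairs observed in normal runs and flags transitions never seen in training.

# Minimal sketch: learn normal consecutive log-message pairs (bigrams) from
# reference runs, then flag transitions never observed in training as anomalies.
# The message identifiers below are made up for illustration.
NORMAL_RUNS = [
    ["conn_open", "auth_ok", "request", "response", "conn_close"],
    ["conn_open", "auth_ok", "request", "request", "response", "conn_close"],
]

def pairs(seq):
    """Consecutive (previous, next) message pairs in one run."""
    return list(zip(seq, seq[1:]))

def learn_transitions(runs):
    allowed = set()
    for run in runs:
        allowed.update(pairs(run))
    return allowed

def detect_anomalies(run, allowed):
    return [t for t in pairs(run) if t not in allowed]

allowed = learn_transitions(NORMAL_RUNS)
# A hijacked control flow emits transitions the normal logic never produces.
suspect = ["conn_open", "auth_ok", "conn_close", "request"]
print(detect_anomalies(suspect, allowed))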

The challenge is then how to add logs without incurring large overhead. One possible solution is to record the minimum necessary information by eliminating messages whose content can be inferred automatically. We can also buffer log data temporarily in memory and output it only periodically (e.g., every 10 seconds). To save space, we can further compress old logs or keep only a high-level log summary.
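
As a small illustrative sketch of such low-overhead logging (the file names, the 10-second flush interval, and the compression policy are assumptions), the following Python class buffers messages in memory, flushes them periodically from a background thread, and gzips older log files to reclaim space.

# Small sketch of low-overhead logging: buffer messages in memory, flush them
# periodically, and gzip the old log file to save space.
import gzip
import os
import threading
import time

class BufferedLogger:
    def __init__(self, path="app.log", flush_every=10.0):
        self.path = path
        self.buffer = []
        self.lock = threading.Lock()
        self.flush_every = flush_every
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def log(self, msg):
        with self.lock:                      # cheap: append to memory only
            self.buffer.append(f"{time.time():.3f} {msg}")

    def _flush_loop(self):
        while True:
            time.sleep(self.flush_every)
            self.flush()

    def flush(self):
        with self.lock:
            pending, self.buffer = self.buffer, []
        if pending:
            with open(self.path, "a") as f:
                f.write("\n".join(pending) + "\n")

    def compress_old(self):
        """Rotate the current file into a gzip archive to reclaim space."""
        if os.path.exists(self.path):
            with open(self.path, "rb") as src, gzip.open(self.path + ".gz", "wb") as dst:
                dst.write(src.read())
            os.remove(self.path)

logger = BufferedLogger(flush_every=10.0)
logger.log("request accepted")
logger.flush()          # force a flush for the example
logger.compress_old()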

Nelson Morgan, The International Computer Science Institute
Manystream Automatic Speech Recognition

We will study the expansion of the speech recognition front-end signal processing to many feature streams (using Gabor filtering and MLP feature transformation in the Tandem framework) in order to improve performance under common acoustic conditions. We will focus on experiments with single distant-microphone data from meetings, using roughly 200 hours of speech from the ICSI, NIST, and AMI corpora. If it is available, we will also incorporate data from Flip Video recordings from Cisco, which will provide a significant test of the generalization of our methods. The improved system will also be compared with the now-standard AFE approach (ETSI 2007), with model adaptation methods, and with combinations of these approaches. In particular, we want to learn the tradeoffs between increasing the data size and model complexity versus using increased computation for an expanded acoustic front end. This experimental work will be accompanied by an analytic component to learn what each of these systems is doing.

Faramarz Fekri, Georgia Tech
Understanding and Improving the Human Network (open call)

The goal of this research is to improve human networks in collaborative information exchange sites. Examples of such networks are expert systems, community question answering and, in general, the expert-services social networks in which members exchange information as a service and possibly make profits. These networks are becoming a popular way of seeking answers to information needs for millions of users. In such networks, the online transaction (i.e., information exchange) most likely takes place between individuals who have never interacted with each other. The consumer of the information often has insufficient information about the service quality of the service provider before the transaction. Hence, the service recipient must take on a prior risk before receiving the actual service. This risk puts the recipient in an unprotected position, since he has no opportunity to try the service before he receives it. This problem gives rise to the use of reputation systems, in which reputations are determined by rules that evaluate the evidence generated by the past behavior of an entity within a protocol. Therefore, the quality of these social networks, as well as the dynamics of their connectivity, depends heavily on the reputation mechanism. This research proposes, for the first time, the application of belief propagation to the design of reputation systems. It represents a reputation system as a factor graph, where the providers and consumers of the information service are arranged as two sets of variable and factor nodes. It then introduces a fully iterative message passing algorithm based on belief propagation to solve for reputations. The preliminary results indicate the superiority of the belief propagation scheme both in computational efficiency and in robustness against (intentional or unintentional) malicious behavior. Thus, the proposed reputation system has the potential to improve both human-centric performance and ad-hoc, information-centric human networks.
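
As a simplified illustration of iterative message passing on the bipartite graph of raters and providers (a simplification for exposition, not the belief-propagation formulation of the proposal), the following Python sketch alternates provider-reputation and rater-reliability updates until they settle.

# Simplified sketch of iterative message passing on the bipartite graph of
# raters and service providers: provider reputations and rater reliabilities
# are updated alternately. All ratings below are invented.
ratings = {            # rater -> {provider: rating in [0, 1]}
    "u1": {"p1": 0.9, "p2": 0.2},
    "u2": {"p1": 0.8, "p2": 0.3},
    "u3": {"p1": 0.1, "p2": 0.9},   # u3 disagrees with the others (possibly malicious)
}

providers = sorted({p for r in ratings.values() for p in r})
reliability = {u: 1.0 for u in ratings}

for _ in range(20):
    # Provider reputation: reliability-weighted average of received ratings.
    reputation = {}
    for p in providers:
        num = sum(reliability[u] * r[p] for u, r in ratings.items() if p in r)
        den = sum(reliability[u] for u, r in ratings.items() if p in r)
        reputation[p] = num / den
    # Rater reliability: high when the rater agrees with current reputations.
    for u, r in ratings.items():
        err = sum(abs(r[p] - reputation[p]) for p in r) / len(r)
        reliability[u] = max(0.05, 1.0 - err)

print({p: round(v, 2) for p, v in reputation.items()})
print({u: round(v, 2) for u, v in reliability.items()})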

Bharat Bhuva, Vanderbilt University
Logic Designs for In-Field Repair and Failure mitigation

Arrayed circuit designs (SRAMs or FPGAs) use redundant circuits to repair failed cells in the field.  Unlike SRAM designs, where all cells are identical and adding spare cells is not difficult, logic circuits do not have easily identifiable designs that can be added as universal spares.  For ASIC designs, the brute-force approach of duplicating entire logic blocks is too expensive and impractical to implement.  This proposal is for the development of post-fabrication, in-field repair approaches for logic circuits that allow the replacement of faulty transistors (or logic gates) without affecting the overall functionality of the IC.  The proposed approach is to use over- and under-approximations of the original logic circuit that can provide redundant minterms and maxterms which can be introduced in the circuit to prevent failed transistors from affecting circuit operation.  The approximations must be designed such that they provide maximum coverage and incur minimal overhead.  We propose to develop algorithms and design techniques that can be integrated into the existing design flow for Cisco and its vendors to facilitate in-field repair of logic circuits.

Mark Zwolinski, University of Southampton
Logic In-Field Repair Research

This proposal is made in response to RFP-2010-063. Conventional fault-tolerance implies replication of hardware and thus overheads of 100% or worse. Regular fabrics, such as FPGAs, may allow reconfiguration, but such schemes commonly omit reference to the control and monitoring mechanisms. In earlier work, we have proposed a totally self-checking controller architecture. In this proposal, we intend to investigate whether such a structure can be used with a much more limited form of error checking in order to manage such errors. In particular, we wish to extend our scheme of self-checking controllers to monitor the state of condition signals that communicate the state of a datapath to the controller. By verifying the effectiveness of a limited monitoring scheme, we will also seek to discover if a more comprehensive fault-monitoring technique can also be used. Hence, this would allow a designer to make an informed decision about the degree of self-checking and repair to include in a system.

Adam Dunkels, Swedish Institute of Computer Science
IP Based Wired/Wireless Sensor/Actuator Networks

IPv6-based sensor networks are emerging, but application-layer access and service discovery are still open questions, and their interactions with low-power radio duty-cycling mechanisms are still poorly understood. In the Contiki operating system, Cisco and SICS have previously developed a low-power IPv6 Ready network stack. In the IPSN project, we develop a REST-based application-layer access infrastructure on top of the underlying low-power IPv6 network. To achieve application-layer access, we also develop a service registration and discovery layer. The results of IPSN will allow low-power, application-layer access to the low-power IPv6 networks that will form the future Internet of Things.

S. J. Ben Yoo, University of California, Davis
Flexible Capacity and Agile Bandwidth Terascale Networking (Optical Networking)

We propose to investigate a novel networking technology based on flexible assignment and routing of spectral bandwidths rather than wavelengths or channels.  The proposed networks exploit a new class of 100 Gb/s to 1 Tb/s transmitters and receivers with elastic bandwidth allocations, and modified ROADMs and OXCs with spectral slicing capabilities.  The resulting flexible-bandwidth networking can achieve the following advantages:

* Dynamic optimization of elastic bandwidth allocations and configurations in the networks,
* Spectrally and energy efficient network operation by minimizing unnecessary transmissions,
* Bandwidth squeezed protection/restoration by reserving minimum bandwidths in case of failure,
* Flexibly accommodating very small (subwavelength) and large (hyperwavelength) flows/circuits,
* Simplified inventory of one kind of flexible-linecards rather than many different fixed-linecards,
* Seamlessly interfacing optical networks to wireless or wireline networks, while supporting future services,
* Accommodating all formats and protocols including OFDM, OTN, and other schemes,
* Compared to the SLICE and VCAT methods pursued at NTT, Infinera, and elsewhere, the proposed schemes are more versatile and exploit innovative integrated technologies developed under previous Cisco and DARPA support.

We propose to conduct theoretical and experimental studies of the above aspects of this innovative flexible-capacity networking under Cisco support, by setting up the world's first flexible-capacity networking testbed.
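
To make the flexible-grid idea concrete, the following toy Python sketch performs first-fit assignment of just enough contiguous spectrum slots along a route; the slot width, spectral-efficiency assumption, links, and demands are invented for illustration and are not part of the proposed testbed design.

# Toy sketch of flexible-grid spectrum assignment: each demand is given just
# enough contiguous spectrum slots on every link of its route (first fit).
SLOT_GHZ = 12.5
NUM_SLOTS = 32
links = {"A-B": [None] * NUM_SLOTS, "B-C": [None] * NUM_SLOTS}

def slots_needed(gbps, ghz_per_gbps=0.25):
    """Assumed spectral efficiency: 0.25 GHz of spectrum per Gb/s."""
    return max(1, round(gbps * ghz_per_gbps / SLOT_GHZ))

def assign(demand_id, path, gbps):
    """First-fit search for a contiguous block that is free on every link of the path."""
    need = slots_needed(gbps)
    for start in range(NUM_SLOTS - need + 1):
        block = range(start, start + need)
        if all(links[l][s] is None for l in path for s in block):
            for l in path:
                for s in block:
                    links[l][s] = demand_id
            return (start, need)
    return None

print(assign("d1", ["A-B", "B-C"], gbps=400))   # large (hyper-wavelength) flow
print(assign("d2", ["A-B"], gbps=40))           # small (sub-wavelength) flow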

Li Chen, University of Saskatchewan
Logic In-Field Repair Research

This proposal introduces a built-in self-repair (BISR) technology to address logic in-field repair. We propose a block-based logic repair method that takes into consideration the regularity of random logic. The reconfigurable elements can be gates, adders, multipliers, or even higher-level components. Four or eight elements can be grouped into one block, for example four functional elements with one serving as a backup. A switch matrix at the inputs of each block selects the corresponding elements. Most faults, including stuck-at faults and open-circuit faults, can be isolated using this switching method. Built-in self-test (BIST) is performed for each local logic block to detect faults during every power-up event. A test pattern is applied to the reconfigurable block by a self-test scan chain, which is administered by a central unit. Current-sensing techniques can be combined into the scan chain for fault detection. This method can be expected to have a hardware overhead of 10-20% if large blocks are used for switching.

Sang H. Baeg, Hanyang University
Logic In-Field Repair Research

Major reliability issues such as SER, NBTI, and VDDmin are increasingly difficult to detect and diagnose in emerging technologies. The proposed research is to develop an innovative way of designing flip-flops to detect and diagnose such reliability faults in flip-flop elements. In the proposed method, the noise margin can be programmed for a subset of flip-flop elements, establishing a targeted reliability window to selectively screen the reliability level of the corresponding flip-flops. A novel grouped flip-flop (GF) structure is proposed as the key mechanism to effectively implement the programmable noise margin. The research activities include re-designing the scan logic in the GF, performing the physical structure design of the GF, finding test pattern dependencies, and determining the programmable noise margin range for the N flip-flops in a GF.

Olivier Bonaventure, Université catholique de Louvain
Preserving the reachability of LISP xTRs

The main focus of this proposed research is to develop techniques that allow Ingress and Egress Tunnel Routers (xTRs) using the Locator/Identifier Separation Protocol (LISP) to deal efficiently with failures. We will focus on LISP sites that are equipped with at least two xTRs.

Our first objective is to develop cache synchronisation techniques that allow the ITRs of a site to maintain synchronised mapping caches, in order to ensure that the site's outgoing traffic is not affected by the failure of one of its ITRs.
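
As a toy illustration of this first objective (a simple full-state exchange between two ITRs; a real design would need TTL handling, versioning, and a secure transport, none of which is shown), the following Python sketch keeps two ITRs' mapping caches synchronised.

# Toy sketch of mapping-cache synchronisation between two ITRs of one site:
# each ITR shares the EID-prefix -> locator mappings it has resolved, so a
# surviving ITR can keep forwarding if its peer fails. Prefixes and locators
# below are invented for illustration.
class ITR:
    def __init__(self, name):
        self.name = name
        self.map_cache = {}                   # EID prefix -> list of RLOCs

    def resolve(self, eid_prefix, rlocs):
        """Entry learned from the mapping system after a cache miss."""
        self.map_cache[eid_prefix] = rlocs

    def sync_from(self, peer):
        """Adopt any mappings the peer has that we are missing."""
        for prefix, rlocs in peer.map_cache.items():
            self.map_cache.setdefault(prefix, rlocs)

itr1, itr2 = ITR("itr1"), ITR("itr2")
itr1.resolve("10.1.0.0/16", ["192.0.2.1", "192.0.2.2"])
itr2.sync_from(itr1)                          # itr2 can now forward without a new lookup
print(itr2.map_cache)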

Our second objective is the development of techniques that allow LISP-encapsulated traffic to be preserved even when one of the site's ETRs becomes unreachable.  We will evaluate the performance of these techniques by using trace-based simulations and lab experiments with LISP-click.

Sonia Fahmy, Purdue University
Routing and Addressing

Detecting and preventing routing configurations that may lead to undesirable stable states is a major challenge. We propose to (1) design and develop a prototype tool to detect BGP policy interactions based on a mathematical framework, and (2) apply this prototype tool to our unique datasets obtained from multiple ISPs.  We will identify the conditions under which unintended or problematic routing states can occur, and develop algorithms to detect these conditions.  The goal of our study is not only to show potential problems that may arise from current configurations, but also to study the minimum information ISPs must exchange to confirm safe global policy interactions.
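
As a toy illustration of how problematic policy interactions can surface in simulation (the topology and rankings form a classic "bad gadget"-style example from the routing-policy literature, not data from the ISPs mentioned above, and simulation under a single activation schedule is a heuristic check rather than a proof of safety), consider the following Python sketch.

# Small sketch: nodes repeatedly re-select their most preferred available route
# to destination 0; revisiting a non-fixed-point global state indicates a
# persistent oscillation under this schedule.
PREFS = {   # node -> ranked list of permitted paths to destination 0 (best first)
    1: [(1, 2, 0), (1, 0)],
    2: [(2, 3, 0), (2, 0)],
    3: [(3, 1, 0), (3, 0)],
}

def step(selected):
    """Each node picks its best-ranked path whose next hop currently offers the path's tail."""
    new = {}
    for node, ranked in PREFS.items():
        new[node] = None
        for path in ranked:
            nxt, tail = path[1], path[1:]
            if tail == (0,) or selected.get(nxt) == tail:
                new[node] = path
                break
    return new

def is_safe(max_steps=50):
    selected, seen = {n: None for n in PREFS}, set()
    for _ in range(max_steps):
        selected = step(selected)
        state = tuple(sorted(selected.items()))
        if state in seen:
            return False          # state revisited without a fixed point: oscillation
        seen.add(state)
        if step(selected) == selected:
            return True           # fixed point reached: stable under this schedule
    return False

print("policies safe?", is_safe())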

Alan Bovik, University of Texas
Video (Open call)

The next generation of wireless networks will become the dominant means for video content delivery, leveraging rapidly expanding cellular and local area network infrastructure. Unfortunately, the application-agnostic paradigm of current data networks is not well-suited to meet projected growth in video traffic volume, nor is it capable of leveraging unique characteristics of real-time and stored video to make more efficient use of existing capacity. As a consequence, wireless networks are not able to leverage the spatio-temporal bursty nature of video. Further, they are not able to make rate-reliability-delay tradeoffs nor are they able to adapt based on the ultimate receiver of video information: the human eye-brain system. We believe that video networks at every time-scale and layer should operate and adapt with respect to perceptual distortions in the video stream.

In our opinion the key research questions are:

* What are the right models for perceptual visual quality for 2-D and 3-D video?
* How can video quality awareness across a network be used to improve network efficiency over short, medium, and long time-scales?
* How can networks adaptively deliver high perceptual visual quality?

We propose three research vectors. Our first vector, Video Quality Metrics for Adaptation (VQA), focuses on advances in video quality assessment and corresponding rate-distortion models that can be used for adaptation. Video quality assessment for 2-D is only now being understood, while 3-D is still largely a topic of research. Video quality metrics are needed to define operational rate-distortion curves that can be implemented to make scheduling and adaptation decisions in the network. Our second vector, Spatio-Temporal Interference Management (STIM) in Heterogeneous Networks, exploits the characteristics of both real-time and streamed video to achieve efficient delivery in cellular systems. In one case, we will use the persistence of video streams to devise low-overhead interference management for heterogeneous networks. In another case, we will exploit the buffering of streamed video at the receiver to opportunistically schedule transmissions when conditions are favorable, e.g., when mobile users are close to the access infrastructure. Our third vector, Learning for Video Network Adaptation (LV), proposes a data-driven learning framework to adapt wireless network operation. We will use learning to develop link adaptation algorithms for perceptually efficient video transmission. Learning agents collect perceptual video quality distortion statistics and use this information to configure cross-layer transport and interference management algorithms. At the application layer, we propose using learning to predict user demand, which can be exploited to predictively cache video and smooth traffic over the wireless link.