Mathematics and Computer Science, Emory University Modeling User Interactions in Social Media
Sponsors: Sateesh Addepalli, Debo Dutta
The goal of this project is to model user interactions and content generation in social media, focusing on loosely organized collaborative information sites such as Community Question Answering (CQA) or community Wikis. CQA sites such as Yahoo! Answers, Baidu Knows, and Naver have emerged as a popular means of seeking answers to complex information needs for millions of users. The interactions on these sites tend to differ from "traditional" online social networks: CQA sites focus on information exchange and not on maintaining social ties. Furthermore, the content and the nature of the interactions depend heavily on the topic or domain, the culture or subculture of the community, the ways that a question is posed, temporal characteristics of the interactions, and other factors. We are developing robust, fine-grained, and temporally sensitive models of user interactions to predict and explain content generation and content access for collaboratively generated content. Our models would allow accurate simulation and measurement of user-centric performance of dynamic and ad-hoc information-centric human networks, and help guide the development of the middleware to make applications over these networks more effective.
Computer Science, University of California, Santa Barbara TCP Santa Barbara: A Rate-Based Congestion Control Algorithm
Sponsor: Fred Baker
This proposal is based on a previous Summer Internship at Cisco. The focus of this project is a modification of the TCP protocol to improve performance over heterogeneous networks. TCP Santa Barbara is a sender side rate-based modification of TCP aimed at more efficient use of the available bandwidth than other TCP variants such as Reno and New Reno. Our algorithm calculates the sending window based on the rate at which packets are acknowledged, and attempts to smooth out the "sawtooth" behavior that is characteristic of Reno and New Reno, replacing this with a fast and smooth reaction to changes in available bandwidth.
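As a sketch of the rate-based idea described above, the toy sender below sets its congestion window from a smoothed estimate of the ACK-derived delivery rate, instead of halving on loss as Reno does. The class name, the EWMA smoothing constant, and the update rule are illustrative assumptions, not the actual TCP Santa Barbara algorithm.

```python
# Illustrative rate-based window update (assumed design, not the real
# TCP Santa Barbara implementation).

class RateBasedSender:
    def __init__(self, mss=1460, alpha=0.125):
        self.mss = mss          # maximum segment size, in bytes
        self.alpha = alpha      # EWMA smoothing factor (assumption)
        self.rate_est = 0.0     # smoothed delivery rate, bytes/second

    def on_ack(self, acked_bytes, ack_interval, rtt):
        """Update the congestion window from the ACK arrival rate.

        acked_bytes  -- bytes newly acknowledged by this ACK
        ack_interval -- time since the previous ACK (seconds)
        rtt          -- current round-trip-time estimate (seconds)
        Returns the new congestion window in segments.
        """
        sample = acked_bytes / ack_interval   # instantaneous delivery rate
        # Smooth the estimate rather than reacting abruptly; this is what
        # replaces the Reno/New Reno "sawtooth" with a gradual adjustment.
        self.rate_est = (1 - self.alpha) * self.rate_est + self.alpha * sample
        # Size the window to the estimated bandwidth-delay product.
        cwnd_bytes = self.rate_est * rtt
        return max(1, int(cwnd_bytes / self.mss))

sender = RateBasedSender()
for _ in range(50):   # steady ACK clock: 2920 B every 10 ms, RTT 100 ms
    cwnd = sender.on_ack(acked_bytes=2920, ack_interval=0.01, rtt=0.1)
```

With a steady 292 kB/s ACK rate the window converges toward the bandwidth-delay product of roughly 20 segments.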
School of Computer Science, University of Oklahoma Evaluation of Saratoga for High Speed Data Transfer
Sponsor: Lloyd Wood
Saratoga is a file transfer protocol that runs over UDP and is based on sending data at a rate independent of the acknowledgement rate from the receiver. Loss recovery is based on reduced periodic acknowledgements from the receiver that are sent in response to queries from the sender. Although Saratoga has been used to download data over dedicated links in operational satellites, the performance of the protocol in terms of various protocol parameters, such as acknowledgement period, link error, and link delay, has not been investigated. The objective of this project is to develop a simulator for Saratoga to allow a systematic and thorough evaluation of its performance, and to develop appropriate congestion control schemes to enable its future use over terrestrial and disruption/delay-tolerant shared links. The simulator will allow dimensioning of the network and links, such as the optimum link buffer size for given acknowledgement periods and link errors. The simulation results will be generalized by analytical performance models of the Saratoga protocol.
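The sender-paced, hole-reporting mechanism can be made concrete with a deliberately simplified toy model (our own abstraction for illustration, not the Saratoga specification or the proposed simulator): the sender transmits regardless of feedback, and every acknowledgement period the receiver reports the still-missing sequence numbers, which the sender then retransmits.

```python
import random

# Toy model of sender-paced transfer with periodic hole reports
# (an assumed simplification, not the Saratoga protocol itself).

def simulate(n_packets, loss_rate, ack_period, seed=1):
    """Count transmissions needed to deliver n_packets over a lossy link."""
    rng = random.Random(seed)
    received = set()
    pending = list(range(n_packets))   # sequence numbers still to (re)send
    sent = 0
    while pending:
        window, pending = pending[:ack_period], pending[ack_period:]
        for seq in window:             # sender paces data regardless of feedback
            sent += 1
            if rng.random() >= loss_rate:
                received.add(seq)
        # Periodic query/ACK: the receiver reports holes among everything
        # transmitted so far; the sender queues them for retransmission.
        holes = [s for s in range(n_packets)
                 if s not in received and s not in pending]
        pending = holes + pending
    return sent

lossless = simulate(1000, loss_rate=0.0, ack_period=100)   # exactly 1000 sends
lossy = simulate(1000, loss_rate=0.1, ack_period=100)      # retransmissions added
```

Sweeping `ack_period` and `loss_rate` in a model like this is the kind of parameter study the proposed simulator would perform rigorously.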
Département d'ingénierie informatique, Université catholique de Louvain Development of Routing and Addressing Architectures for the Internet
Sponsor: David Meyer
The first objective of this proposed URP is to pursue the development of our LISP implementation on the FreeBSD kernel. We believe that having a publicly available implementation of LISP, in addition to the implementation being developed by Cisco and tested by operators, will be a key contribution and the spark that will trigger a larger interest from the research community. It will allow other researchers to test, improve and evaluate this new proposal in the Internet or in testbeds such as GENI, VINI, emulab, etc. In addition to developing the implementation, we intend to deploy some LISP nodes to allow end-to-end experiments across the Internet, exploring issues and requirements of the mapping distribution protocol.
Our second objective within this URP is to complement this implementation by developing a mapping distribution system based on a DHT approach.
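One common DHT building block that could underlie such a mapping system is consistent hashing: each mapping node owns an arc of a hash ring, and an EID prefix is stored on the first node clockwise from its hash. The sketch below is a generic illustration under that assumption; the node names and the use of SHA-1 are ours, not part of the proposal.

```python
import hashlib
from bisect import bisect

# Generic consistent-hash ring for a mapping database (illustrative).

def h(key):
    """Stable 160-bit integer hash of a string key."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class MappingDHT:
    """Each node owns the arc of the ring ending at its own hash."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)
        self.keys = [k for k, _ in self.ring]

    def node_for(self, eid_prefix):
        # First node clockwise from the prefix's hash (wrapping around).
        i = bisect(self.keys, h(eid_prefix)) % len(self.ring)
        return self.ring[i][1]

dht = MappingDHT(["node-a", "node-b", "node-c"])
owner = dht.node_for("192.0.2.0/24")   # node responsible for this EID prefix
```

Consistent hashing keeps the reassignment of prefixes small when mapping nodes join or leave, which matters for an Internet-scale mapping service.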
D.R. Cheriton School of Computer Science, University of Waterloo End-to-End Network Virtualization
Sponsor: Tsegereda Beyene
We propose to design a framework for network virtualization, so that multiple service providers (SPs) can dynamically compose customized end-to-end virtual networks (VNs) and deploy services as well as manage them by effectively utilizing underlying network resources leased from multiple infrastructure providers (InPs). In this respect, we will specifically address the end-to-end virtual network resource allocation problem, and management issues in a network virtualization environment (NVE).
Our work on network virtualization will complement the ongoing VCI (Virtual Computing Infrastructure) initiative undertaken by other researchers in our school, who address server, storage and autonomic aspects of virtualization. Working in close proximity with them will enable us to exchange ideas on diverse aspects of virtualized computing, resulting in realizing a complete end-to-end virtualized environment.
Electrical and Computer Engineering, Duke University Optimization Methods and Tools to Support Fault-Insertion Technology for System Availability and Diagnostic Coverage Measurement
Sponsor: Xinli Gu
The goal of this project is to develop practical fault-insertion (FI) technology for understanding the impact of physical defects and intermittent/transient faults on chip, board, and system operation in typical Cisco products. Quality assurance is a major problem for complex electronic systems. The identification of the root cause of a field failure is often difficult, or even impossible. Appropriate diagnostic techniques need to be developed to reduce the cost and time associated with failure-cause identification. This project is aimed at enabling a new technique for evaluating the effectiveness of the FI technology being developed at Cisco. The research results will also provide guidelines on the best way to implement FI, cost/benefit analysis, and adaptive diagnosis via learning, predictive analysis, and database generation.
Electrical and Computer Engineering, University of Delaware Beamforming for Backhaul in Outdoor IEEE 802.11n Networks
Sponsor: Douglas Chan
This is a continuation of a successful project on IEEE 802.11n beamforming. Our primary work in the past year has involved beamforming techniques for 802.11n links (AP-Client) in outdoor environments. In collaboration with RF and wireless communications experts in WNBU, we derived new findings related to feedback methods, quantization, Doppler, delay spread, and power control. This knowledge has not only been helping WNBU engineers to find innovative ways for adding value to their products, but has also resulted in papers presented at several conferences. During the coming year, we propose to focus on another major issue related to the outdoor deployment of 802.11n - AP backhaul.
Electrical and Computer Engineering, Purdue University Video Coding for Backwards-Compatible HD and Error-Prone Networks
Sponsor: Arturo Rodriquez
Most of the high-definition (HD) television material currently being transmitted is either in the 1920x1080-pixel 30 frames/second interlaced-scan format (referred to as 1080i30) or the 1280x720-pixel 60 frames/second progressive-scan format (referred to as 720p60). These two formats represent different choices and trade-offs. Currently deployed HD decoders will not be able to decode the forthcoming 1080p60 progressive format. Since there is not enough bandwidth available to transmit a program in two HD formats independently, there is a need to find a way to code a program in 1080p60 efficiently in a backwards-compatible manner. We propose to (i) develop three different scalable coding methods that address this problem, and (ii) investigate the effects of MPEG-2 versus AVC coding in the lower-resolution video layer. We also propose investigating video coding methods that enable receivers to adapt gracefully to certain network impairments, and that offer other potential benefits.
Electrical Engineering and Computer Science, Stanford University Adaptive Error Measurement, Concealment, and Repair for IP Streaming Video
Sponsor: Dave Oran
Typical IPTV solutions today combine multicast with local retransmissions of IP packets from a Retransmission Server (RS). The RS also supports fast channel change through unicast bursts that provide rapid synchronization of key information and supply an initial video stream. Building upon our ongoing collaboration with Cisco on IPTV error recovery, we propose to extend the framework to fast channel change. In particular, we will explore the benefits of video transcoding at the RS and the possibility of shifting the unicast burst function to nearby peer set-top boxes receiving the same multicast. We hope to show that the number of set-top boxes supported by each Retransmission Server can be increased dramatically, without affecting the user experience.
Kenneth E. Goodson
Mechanical Engineering, Stanford University Thermal and Power Efficiency Engineering of Microprocessors and ASICs in Router Systems
Sponsor: Judy Priest
Router and switching systems face mounting challenges relating to power consumption, noise, and reliability. Power consumption comprises the high wattage needed to drive devices and an almost equal amount of power to remove heat. Much progress can be made by leveraging the improved thermal management technologies developed for the electronics industry over the past decade, which may enable significant reductions in fan power consumption and noise at projected chip power generation levels. This work quantifies these potential benefits by developing electrothermal simulations of chips and chip networks and by helping Cisco develop a systematic chip temperature measurement approach.
Department of Electrical and Computer Engineering, Polytechnic Institute of NYU Hardware acceleration of security filters on the Cisco SAMI platform to protect VoIP against denial of service attacks
Sponsors: Lillian Dai, Sateesh Addepalli
A 2005 report by the VoIP security alliance classified VoIP threats into five general categories: 1) interception and modification, 2) eavesdropping, 3) social engineering, 4) unintentional interruption, and 5) intentional interruption (VoIP-specific denial of service).
In this project, NYU-Poly will implement security filtering mechanisms in hardware using the Cisco Service and Application Module for IP (SAMI) to protect VoIP against Denial of Service (DoS) attacks. To accomplish this overall goal, we will:
1. Understand the hardware architecture and programming interface of the Cisco SAMI card.
2. Implement SIP security filters onto the Cisco SAMI card.
3. Measure the effectiveness of the hardware implemented SIP security filtering mechanisms.
4. Build a small SIP signaling testbed for proof-of-concept experiments.
Edward W. Knightly
Electrical and Computer Engineering, Rice University Gateway-Driven Diagnostics and Performance Prediction for Mesh Networks
Sponsor: Santosh Pandey
Our completed project yielded three key outcomes: (i) expanded deployment of the TFA-Rice Wireless programmable mesh network to a user population exceeding 4,000, (ii) an analysis and contention-window policy solution to the TCP starvation problem, and (iii) a study of the impact of management overhead as network size scales.
We propose the following project for the next research phase. We will develop a network management toolkit to address network diagnostics and to answer "what-if" queries. Our key technique is a unique coupling of passive measurements obtained at the gateway with analytical models. While no analytical model today can accurately characterize the behavior of a real-world mesh deployment, we will show that when seeded with on-line measurements, such models can provide a new understanding of the reasons behind the network's current behavior and suggest corrective actions best suited to meeting the network operator's objectives. Throughout this project, we will employ an experimental research methodology utilizing our urban mesh network deployment.
Grenoble INP / TIMA Robust Logic and Flip-Flop Design
Sponsor: Adrian Evans
Optimizing SEU-resilient designs involves more than improving the process, adding redundancy, or using hardened cells. One of the major challenges is to precisely identify which parts of a design should actually be protected to achieve a required failure-in-time (FIT) rate. This FIT rate is of course related to the soft error rate (SER) but must also take into account the whole system architecture and even, when possible, the characteristics of the application. Most current techniques either do not apply early in the design flow or are restricted to particular circuits and architectures. Other limitations are related to experimental durations and to the increasing complexity of the designs.
The goal of our project is twofold. On one hand, we propose to experiment with new approaches to better evaluate the expected FIT rate that is actually meaningful from the user's point of view. On the other hand, we propose the definition of a hierarchical, compositional analysis approach able to cope with system complexity. This work targets digital hardware/software systems on chip. The expected results are (1) a design flow methodology to better evaluate the actual FIT rate of an integrated system and (2) the capability to identify the most critical parts in the system. This second aspect would be more specifically supported by the Cisco research award.
Alex X. Liu
Computer Science and Engineering, Michigan State University GADI: Grammar Aware Deep Packet Inspection at Line Speed
Sponsor: Sailesh Kumar
Deep packet inspection is the key operation in many critical networking systems such as intrusion detection and prevention systems (IDSes/IPSes). Currently, deep packet inspection can only check for simple patterns that are typically specified as regular expressions. Unfortunately, regular expression-based deep packet inspection has several limitations. For example, important information such as the application protocol itself is usually ignored. This typically means each byte in a packet payload is treated identically. Carefully inspecting every byte at line speed is both difficult and unnecessary. Furthermore, the occurrence of a string pattern in one application field may indicate the presence of a virus while its occurrence in another application field may not. Thus, regular expression-based systems have difficulty satisfying the high accuracy requirements of IDSes/IPSes. To achieve more accurate and more efficient deep packet inspection, we propose to build a grammar aware deep packet inspection (GADI) system. This system will behave differently for different application protocols. Thus, we achieve more accurate deep packet inspection.
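The field-aware idea can be illustrated with a toy inspector (our own simplified example with a hypothetical signature, not the GADI system): instead of scanning every payload byte, it first parses a simplified, assumed application grammar and then applies each signature only to the field where its occurrence is meaningful.

```python
import re

# Toy grammar-aware inspection: signatures are scoped to protocol fields
# (the SIP-like grammar and the signature are illustrative assumptions).

SIGNATURES = {"User-Agent": re.compile(r"EvilBot/\d+")}   # hypothetical rule

def inspect(payload: str) -> bool:
    """Return True if any field-scoped signature matches."""
    fields = {}
    for line in payload.splitlines()[1:]:      # skip the request line
        if ":" in line:
            name, _, value = line.partition(":")
            fields[name.strip()] = value.strip()
    # Each signature is checked only against its own field, never against
    # the whole payload byte stream.
    return any(sig.search(fields.get(field, ""))
               for field, sig in SIGNATURES.items())

msg = "INVITE sip:bob@example.com SIP/2.0\nUser-Agent: EvilBot/3\nTo: bob"
flagged = inspect(msg)
```

Note that the same string appearing in a different field (say, `To:`) would not trigger the rule, which is exactly the accuracy gain over field-blind regular-expression matching.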
Computer Science, UNC Chapel Hill Data Structures to Support Analysis of Network Traffic using Sets and Bags at IPv6 and Beyond
Sponsors: Henry Stern, Chip Popoviciu
Sets and bags (multisets) have proven to be very effective abstractions for analyzing large NetFlow datasets such as those captured by DoD for the military and US CERT for civilian government agencies. For IPv4, sets are easily implemented as sparse bit arrays. Bags can be implemented in a similar fashion, taking advantage of address locality to keep the practical storage requirements within a feasible range. These IPv4 techniques do not extend easily to IPv6. The primary reason is that with 128-bit addresses and no reason to expect the degree of locality seen in IPv4, storage requirements for sets and bags would be so dominated by pointer hierarchies that even modest numbers of badly distributed addresses could exhaust memory. The proposed research will investigate a number of data structures that would provide set and bag functionality for IPv6 addresses and other large, sparse index sets. Among the structures to be considered are balanced trees (used as a baseline for comparison because of their non-linear space and time requirements) and structures with constant-time lookup and insertion, including Bloom filters, cuckoo hashes, and perfect hashes; the latter also have space requirements that are approximately linear in the expected size of the set or bag being stored. Among the properties to be considered are the suitability of each structure for real-time lookup and update, and the efficiency with which the typical set operations (union, intersection, differencing) and bag operations (addition, subtraction, multiplication, division, masking with a set, extraction of a cover set, and scaling, i.e., scalar operations on values) can be performed. The space requirements for both memory and file operations will also be considered. In addition to technical reports that present the results of the analysis, a sample implementation will be produced.
This will be designed as a library that can be used to replace the current set and bag implementations in the SiLK NetFlow analysis tools from CERT, but the core functionality will be designed to permit use of the library in other systems as well. Potential applications include arbitrarily large ACLs for firewalls (indexed by IP or IP/port combinations) and reputation systems.
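Among the candidate structures, a Bloom filter is the simplest to sketch for IPv6 set membership. The parameter choices and hash construction below are our own illustrative assumptions, not the design that the research would produce.

```python
import hashlib
import ipaddress

# Minimal Bloom-filter set for IPv6 addresses (illustrative parameters):
# constant-time add/lookup, linear space, with a small false-positive rate
# and no support for deletion (which is why cuckoo and perfect hashes are
# also on the candidate list).

class IPv6BloomSet:
    def __init__(self, m_bits=1 << 20, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, addr):
        packed = ipaddress.IPv6Address(addr).packed   # 16-byte key
        for i in range(self.k):
            # k independent hashes via distinct salts.
            d = hashlib.blake2b(packed, digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(d, "big") % self.m

    def add(self, addr):
        for ix in self._indexes(addr):
            self.bits[ix // 8] |= 1 << (ix % 8)

    def __contains__(self, addr):
        return all(self.bits[ix // 8] & (1 << (ix % 8))
                   for ix in self._indexes(addr))

flows = IPv6BloomSet()
flows.add("2001:db8::1")
```

The space cost is fixed at `m_bits` regardless of how badly distributed the 128-bit addresses are, which is exactly the property the pointer-hierarchy approaches lack.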
Nick McKeown, John Lockwood
Electrical Engineering and Computer Science, Stanford University Support of the NetFPGA for High-Performance Networks
Sponsor: Pere Monclus
The NetFPGA platform is the only tool available that enables teaching and research on networking hardware. Students and researchers use an industry-standard design flow to create new designs that will run at line-rate. Since its public launch in Dec 2007, over 100 universities in 15 countries use NetFPGA. Stanford provides reference designs, gateware, software and support for free to all users, and maintains an online user community. In the next phase of the program, we plan a 10GE version, "NetFPGA-10". The board will have four ports of 10GE, a more advanced FPGA (Virtex5) and a PCIe interface to a host computer. We plan to design the board (with Xilinx) and make it available to schools for under $1,000 using donated parts from as many sources as possible. We will port our reference designs, and provide training and tutorial support to new users. While the NetFPGA-10 board will require more funds than we are requesting here, we are asking Cisco to fund a portion of the development, to enable us to take this platform to a broad population of users.
Z. Morley Mao
Electrical Engineering and Computer Science, University of Michigan Performance and Accuracy Enhancement for Remote Network Management
Sponsor: Ammar Rayes
Automated remote network management of networks of diverse sizes and configuration settings is increasingly critical to ensure early identification of, and fast response to, network and network service problems. Many challenges exist in this problem space due to traditional manual and ad-hoc management practices. One important aspect of the problem solution lies in the statistical and anomaly-based analysis of the usage patterns of various network devices, as this enables inventory tracking, misconfiguration identification, and detailed analysis of the current usage scenarios, enabling suggestions for future improvements. The specific goals of network management may differ across network types: for example, corporate enterprise networks may focus on critical infrastructure services such as VoIP and video conferencing performance and capabilities, whereas ISPs would more likely emphasize metrics such as stability, robustness, and performance of the network to ensure the satisfaction of customer SLAs. Thus, taking into consideration the network-specific goals and priorities in different network settings is important to devise relevant solutions. In this project we focus on several key enabling technologies for automated remote network management. We outline these challenges and our approaches here. We associate each challenge with one of three tasks: (1) data collection and storage, (2) data analysis, and (3) mitigation solutions, e.g., network reconfigurations, device upgrades, capacity provisioning, etc. To fulfill these tasks, we propose key techniques such as systematic tradeoff analysis, hybrid hierarchical data collection algorithms, longitudinal cross-network data analysis, and data-driven mitigation responses. Our work presents an important step towards efficient and accurate automated management of both network services and enterprise networks.
Furthermore, we intend to integrate our system with the ITIL (Information Technology Infrastructure Library) standard framework.
Electrical and Computer Engineering, Stanford University Millimeter-wave Wireless Network: High-speed, Secure Wire-like Connectivity
Sponsor: Sateesh Addepalli
Millimeter-wave wireless communication (60 GHz or higher) approaches the speed and security of its wired counterpart, while maintaining the flexibility and portable convenience of wireless. Since point-to-point millimeter-wave physical radios with electronically steerable arrays have been successfully implemented in CMOS with low power and small form factor (e.g., the SiBEAM 60-GHz radio), we propose to take this to the next level and implement a fully infrastructureless network of millimeter-wave radios. This network can support multi-gigabit data link connectivity. It enables multi-gigabit wireless Ethernet in enterprise networks, multi-gigabit WLAN and multimedia networking at home, car-to-roadside communications, and vehicle-to-vehicle communications.
Computer Science and Engineering, University of California, San Diego Scam Analysis and Defense via Botnet Infiltration
Sponsor: Scot Kennedy
Over the last five years, the capability of attackers to easily compromise large numbers of Internet hosts has emerged as the backbone of a vibrant criminal economy encompassing unsolicited bulk-email (spam), phishing, click-fraud, DDoS extortion and identity theft. At the core of virtually all on-line scams are large-scale botnets. The low marginal costs and large scale provided by this substrate allow scammers to mount campaigns where the marginal profit is very small.
By far the best known example of this activity is unsolicited bulk e-mail, or spam. However, while there is a considerable body of research focused on spam from the recipient's point of view, we understand considerably less about the spammer's perspective: how spammers test, target, distribute and deliver a large spam campaign in practice, how many different "spam crews" dominate the overall spam "market", and what the ultimate return-on-investment is for spam campaigns that drives the spam value equation. This proposal revolves around a new methodology --- botnet infiltration --- that exploits this opportunity to measure, analyze and manipulate spam campaigns from the inside. We propose to use this technique to drive a set of analytic and defensive applications, including: (1) measuring the conversion rate of large-scale spam campaigns, (2) developing fingerprinting techniques to identify different spamming cartels, (3) automatically parsing spam templates from infiltrated botnets to build highly accurate dynamic spam filters that automatically adapt to changes in spam, and (4) exploring the underlying structural elements of the Internet underground economy by fingerprinting the back-end of the supply chain for scams.
Laurent Toutain, Rémi Després
Network and Multimedia, Telecom Bretagne Port-Extended IPv4 across IPv6 networks - Experimentation of a SAM based implementation
Sponsor: Mark Townsley
With the forthcoming IPv4 address shortage arriving before IPv6 can be generalized, it is crucial to offer interoperability between applications running on IPv6 hosts and IPv4 hosts, since a pure dual-stack model is not sufficient for this.
The project deals with the use, through IPv6 networks, of port-extended IPv4 addressing (PEv4), implementation being based on Stateless Address Mapping (SAM), introduced in IETF in draft-despres-sam-01.
Compared to carrier-grade NAT with Dual-Stack Lite (CGN/DS-Lite), the most advanced proposal in the IETF, this approach is inherently more scalable, being stateless in ISP infrastructures, and also greatly simplifies ISP service definitions (no detailed NAT functions to specify).
On the other hand, SAM/PEv4 depends on more upgrades in router CPEs and, to reach its full potential, including end-to-end transparency at the network layer, in hosts themselves. The goal of the project is to validate an operational implementation, to evaluate its impact on existing architectures, and to investigate its effect on typical user applications.
The 12-month project comprises: (1) detailed specification and open-source software implementation of PEv4/SAM in hosts, in router CPEs, and in ISP border gateways; (2) evaluation of operational constraints on Telecom Bretagne's network, used as a testbed; (3) investigation of the consequences of port restrictions (in IPv4) on user experience; and (4) contributions to the IETF on lessons learned.
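The statelessness of the approach can be illustrated with a toy port-extended mapping in the spirit of SAM (our own illustrative scheme, not the algorithm of draft-despres-sam-01): bits of the customer's IPv6 prefix deterministically select a shared IPv4 address and a disjoint slice of its port space, so the ISP keeps no per-flow NAT state.

```python
import ipaddress

# Toy stateless port-extended IPv4 mapping (assumed scheme, for
# illustration only).

def pev4_mapping(ipv6_prefix, shared_v4_base, sharing_bits=4):
    """Return (ipv4_address, (port_lo, port_hi)) for a customer prefix."""
    prefix = ipaddress.IPv6Network(ipv6_prefix)
    # Use the low 16 bits of the routed prefix as a subscriber index
    # (a simplifying assumption).
    idx = (int(prefix.network_address) >> (128 - prefix.prefixlen)) & 0xFFFF
    share = idx & ((1 << sharing_bits) - 1)     # which port slice
    host = idx >> sharing_bits                  # offset into the IPv4 pool
    base = int(ipaddress.IPv4Address(shared_v4_base))
    v4 = str(ipaddress.IPv4Address(base + host))
    ports_per = 65536 // (1 << sharing_bits)    # 4096 ports per subscriber
    lo = share * ports_per                      # a real scheme would avoid
    return v4, (lo, lo + ports_per - 1)         # the well-known port range

v4, ports = pev4_mapping("2001:db8:0:1200::/56", "198.51.100.0")
```

Because both endpoints of the mapping are pure functions of the prefix, any border gateway can compute it on the fly, which is the scalability argument made above against per-flow CGN state.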
Electrical Engineering, Princeton University New Paradigms in Fountain Coding
Sponsor: Rodolfo Milito
We propose a research program on new paradigms and applications of fountain codes in content delivery over unreliable networks and in lossless and lossy data compression.
Rateless lossy data compression is desirable in problems where the level of fidelity is to be decided by the decompressor rather than the compressor. We propose to explore the turbo principle for rateless lossy data compression in conjunction with universal modeling algorithms.
Taking advantage of residual data redundancy at the decoder has proven to be effective within a traditional fixed-blocklength approach. However, it is very attractive to couple it with fountain codes for error correction, so that decoding can finish earlier than it would without exploiting the redundancy of the source.
Rateless fountain codes for two-way communication and joint compression/transmission will also be explored.
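The rateless idea can be made concrete with a toy LT-style fountain code over byte blocks: the encoder can emit an unbounded stream of XOR combinations of source blocks, and a peeling decoder finishes as soon as enough symbols have arrived. The degree distribution below is a crude stand-in for a robust soliton distribution, and the whole sketch is an illustration, not any code proposed here.

```python
import random

# Toy LT-style fountain code (illustrative; degree distribution is an
# assumption, not a robust soliton distribution).

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, rng):
    """Yield an endless stream of (source_indices, xor_of_those_blocks)."""
    k = len(blocks)
    while True:
        degree = min(k, rng.choice([1, 1, 2, 2, 2, 3, 4]))
        idxs = rng.sample(range(k), degree)
        sym = blocks[idxs[0]]
        for i in idxs[1:]:
            sym = xor(sym, blocks[i])
        yield set(idxs), sym

def decode(k, symbols):
    """Peeling decoder: repeatedly resolve symbols reduced to degree 1."""
    got, pending = {}, [(set(ix), s) for ix, s in symbols]
    changed = True
    while changed and len(got) < k:
        changed = False
        nxt = []
        for idxs, sym in pending:
            for i in idxs & got.keys():    # subtract already-known blocks
                sym = xor(sym, got[i])
            idxs = idxs - got.keys()
            if len(idxs) == 1:
                got[idxs.pop()] = sym
                changed = True
            elif idxs:
                nxt.append((idxs, sym))
        pending = nxt
    return [got.get(i) for i in range(k)]

blocks = [bytes([i]) * 8 for i in range(6)]
stream = encode(blocks, random.Random(0))
recovered = decode(6, [next(stream) for _ in range(30)])
```

The receiver never tells the sender a rate in advance; it simply collects symbols until peeling completes, which is the property exploited both for content delivery over unreliable networks and for rateless compression.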
John D. Villasenor
Electrical Engineering, University of California, Los Angeles Globally Optimal Proportionally Fair Scheduling and Its Impact on Femtocell Access Policy
Sponsor: Sateesh Addepalli
While femtocells (more formally, "Home Node B", or "HNB") are now gaining significant traction and interest in the commercial community and in the associated standards bodies, there remain open technical issues that, if resolved, can significantly enhance the value of HNBs across the entire cellular ecosystem. Some of the most important such issues relate to scheduling and access policy. The scheduling problem involves determining, in a complex network comprising many mobile terminals ("User Equipment", or "UEs"), HNBs, and traditional "macro" base stations, and in which the traffic is a dynamic mix of voice, downlink data, and uplink data, how to allocate base station resources among multiple competing UEs. The access problem, which is closely related, involves the policies and actions that govern whether mobile stations are permitted to connect to nearby femtocells. The present proposal aims to address the question of how, in a network in which the active set for mobile stations contains one or more femtocells as well as one or more macro base stations, can service be fairly allocated given potentially complex and rapidly changing traffic loads and network conditions. In addition, the proposed research aims to identify how access policy can be used to improve the fairness of the resource allocations.
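For reference, the classic single-cell proportionally fair scheduler serves, in each slot, the UE maximizing the ratio of instantaneous achievable rate to exponentially averaged served throughput. The sketch below is that textbook baseline, not the globally optimal multi-cell scheme the proposal targets; the averaging window and rate values are illustrative.

```python
# Textbook single-cell proportional-fair scheduler (baseline sketch).

def pf_schedule(inst_rates, avg_tput, tc=100.0):
    """Pick a UE for this slot and update throughput averages in place.

    inst_rates -- achievable rate of each UE this slot
    avg_tput   -- EWMA of served throughput per UE (mutated here)
    tc         -- averaging window, in slots
    """
    chosen = max(range(len(inst_rates)),
                 key=lambda i: inst_rates[i] / max(avg_tput[i], 1e-9))
    for i in range(len(inst_rates)):
        served = inst_rates[i] if i == chosen else 0.0
        avg_tput[i] += (served - avg_tput[i]) / tc   # exponential average
    return chosen

avg = [1e-9, 1e-9, 1e-9]            # start with (near-)zero history
picks = [pf_schedule(rates, avg)
         for rates in ([10, 5, 1], [10, 5, 1], [1, 20, 1])]
```

Even this single-cell form balances opportunism (ride good channels) against fairness (starved UEs gain priority); extending the objective across femtocell and macro layers, with access policy in the loop, is the open problem described above.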
Electrical and Computer Engineering, Clemson University Traffic Analysis for Broadband Internet Streams towards Connected Vehicles over an Optical and Wireless Mesh Network Infrastructure
Sponsor: Flavio Bonomi
Connected vehicle technologies are viewed by the transportation authorities, automotive industries, and research communities as one critical solution to significantly enhance future vehicles' safety, mobility, productivity and economy. By enabling vehicles to wirelessly communicate with other vehicles as well as a roadside infrastructure, vehicles become constantly connected to a grid of information, control, and support services. The US government has driven the technology development via the US DOT Intelligent Transportation System (ITS) and Vehicle-Infrastructure Integration (VII) programs, while the US automotive industries have formed the Connected Vehicle Trade Association (CVTA) and the Vehicle Safety Communication (VSC) Consortium. As shown in the technology roadmap of CVTA's Connected Vehicle Proving Center (CVPC), the government and industries' initial focus was on dedicated short range communication (DSRC) and cellular technology for supporting collision avoidance and localized services. The focus of the upcoming phase is expected to be on broadband data services over a heterogeneous wide-area infrastructure of optical and wireless networking technologies. The proposed project aims to utilize the unique strengths at Clemson University to study the emerging challenges for broadband networking between connected vehicles and an optical-backhauled wireless mesh network infrastructure through experimental studies and model development.
Barcelona Supercomputing Center Opportunities and Limitations for Multi-core Processors Running Network Applications
We propose to develop analytical models for multi-core multi-threaded architectures. The analytical models will help characterize the performance of different architectures under the workload generated by different traffic and application scenarios. One type of question that we intend to pose, and answer, is: what is the optimal number of active cores for a given level of entanglement? The intuition is that in some real-world cases dependencies may grow in ways that negate the benefits of further parallelization. Amdahl's model is the obvious starting point, to be enriched with additional considerations. While there is substantial parallelism to exploit in network applications, the potential dependencies, which significantly impact the performance of multi-core systems, cannot be ignored. Dependencies include both inter- and intra-flow packet dependencies, and entanglements even within different sections of the workload triggered by a packet. We favour analytical models over simulations for the insight they condense, and resort to the latter when necessary. Based on our previous experience, we estimate that the analytical models will be able to predict the performance of many of the target systems within 20% of that achieved by the real system.
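Amdahl's model, the stated starting point, can be enriched with a toy per-core coordination cost standing in for "entanglement"; the linear penalty term below is purely our illustrative assumption, not the model the project will develop.

```python
# Amdahl's law plus a toy "entanglement" penalty (the linear coordination
# cost is an assumption for illustration).

def amdahl(f_serial, n):
    """Classic Amdahl speedup for serial fraction f_serial on n cores."""
    return 1.0 / (f_serial + (1.0 - f_serial) / n)

def effective_speedup(f_serial, n, coord=0.01):
    """Speedup when each additional core adds a fixed coordination cost."""
    return 1.0 / (f_serial + (1.0 - f_serial) / n + coord * (n - 1))

def optimal_cores(f_serial, coord=0.01, n_max=64):
    """Core count maximizing the penalized speedup."""
    return max(range(1, n_max + 1),
               key=lambda n: effective_speedup(f_serial, n, coord))

best = optimal_cores(0.05)   # with these numbers, >10 cores is counterproductive
```

Even this crude enrichment reproduces the intuition above: the penalized speedup has an interior maximum, so activating every available core can reduce throughput once dependencies dominate.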
Universitat Politecnica de Catalunya Path-State Vectors
Although many arguments can be found in the literature stating that it is
time to replace BGP, the practical and economical implications associated
with its replacement are obstacles hard to overcome. The challenge nowadays
is to find ways to improve different aspects of the inter-domain routing
system that neither require the imminent replacement of BGP, nor the
development of upward-compatible extensions tending to make BGP even more
irreplaceable than it is today. In light of this, this research proposal
aims at developing the concept of a Path-State Vector (PSV), as a promising
and straightforward way to overlay some key functions out from BGP. In the
PSV model that we conceive, the distribution of reachability information
(i.e., the localizers) as well as the computation of loop-free paths are
kept in the scope of BGP (the underlay), whereas the critical issues
currently driving the replacement of BGP can be decoupled from the latter,
and managed through one or more domain-level overlays. These domain-level
overlays can be designed to address the convergence, TE, or security issues
in BGP. Unlike pure overlay networks, which simply circumvent BGP, the PSV
model that we devise should feed from and assist BGP, as well as offer an
effective coupling between BGP and the functions overlaid on top of it.
To this end, we propose that PSVs build a graph overlay. Our previous
research has shown that graphs inspired by link-state protocols cannot offer
a unique and consistent view of the forwarding paths of an internetwork
under the current export policies between domains. We have recently found
how to build a suitable graph for a PSV, and more importantly, this graph
does not violate the policy opaqueness required by ISPs, which provides new
and promising research opportunities. As a starting point, this initiative
proposes to focus on two objectives: (1) the design of a highly scalable PSV
protocol; and (2) the exploitation of the graph overlay in a PSV to remove
the path-exploration phenomenon from BGP and thereby drastically reduce
convergence time on the Internet.
Centre Tecnologic de Telecomunicacions de Catalunya (CTTC) Distributed interface selection for seamless terminal mobility across
Next-generation terminals are equipped with multiple wireless (and wired)
interfaces to receive/send information through a variety of networks, each
one designed with a different access technology. The opportunity to transmit
over multiple interfaces by a multi-mode device can benefit both the user
and the network in terms of perceived quality and efficiency, respectively.
The research goal is to study techniques for the provision of the
information needed for a multi-mode mobile device to select the most
appropriate network interface to send/receive its traffic at each moment of
the communication and, of course, the definition of the interface selection algorithm itself.
Department of Mathematics, Statistics and Computer Science, Marquette University Second Generation Embedded Systems Curriculum and Laboratory
Embedded systems pervade our modern world, and their ubiquity is likely to expand even as the power and flexibility of embedded hardware advances. Embedded systems are hard to design, build, test, and study for a variety of reasons, including enormous system diversity and tight resource constraints. Despite their growing complexity and importance, embedded systems are not well studied in our nation's universities and colleges. With Cisco's generous prior support, our work has led the way in developing replicable, flexible, cost-effective embedded systems laboratories for education and research; we propose to take this successful collaboration to the next level. We will specifically emphasize the design of a low-cost, readily duplicable laboratory environment to support research and education on embedded networking devices. Our primary target device is the Linksys WRT family of consumer wireless routers.
Dept. of Electrical and Computer Engineering, North Carolina State University Exploiting Multicore Parallelism in the Control Plane
Sponsor: Michael Beesley
While much networking research is geared toward increasing throughput in the data plane, recent routers have experienced scalability problems in the control plane. The number of logical and physical interfaces is increasing, as is the complexity of router protocols and management services. This project will explore ways in which multicore processors can be used to address the scalability problems, in terms of both performance and programmer productivity.
First, we will select some open-source codes that are representative of interesting control-plane applications. Then, we will evaluate the concurrency present in those codes, determining which can benefit from a multicore approach. Finally, we will evaluate various architectural mechanisms (e.g., transactional memory, cache hierarchies and protocols, speculative memory accesses) that can improve scalability and performance, and/or make the parallel codes easier to write and maintain.
Department of Computer Science and Engineering, The Hong Kong University of Science and Technology A Robust Low-Delay Network Infrastructure for Large-Scale Peer-to-Peer IPTV Streaming
Peer-to-peer (P2P) IPTV streaming promises scalable and cost-effective solutions for delivering live video services to a large number of distributed users. In this proposal we study how to design a robust, low-delay P2P IPTV network infrastructure. The robustness comes from: 1) the presence of multiple parents for each peer (which is simply a set-top box), which is necessary due to the asymmetric bandwidth and unreliability of peers; and 2) the introduction and integration of stable content servers or super-nodes to reduce delay and improve system stability. Building on our Infocom'08 work, we will study low-delay mesh design under a given robustness requirement. We will then study how to integrate content servers to form a seamless, low-delay, and robust network. We will formulate the problems, propose distributed protocols, and study their performance via simulations, implementation, and experimental measurements. The project will leverage the existing streaming platform in the PI's IPTV lab, funded by the local government and industry.
Department of Computer Science and Engineering, University of South Florida Green Networking: Reducing the Energy Use of LAN Switches and Connected Hosts
Reducing the energy consumption of LAN switches and routers, and of connected hosts, is of interest for environmental, economic, and operational reasons. In this project, we propose to investigate 1) the ability to cycle an Ethernet link between on and off using PAUSE flow control, powering down during the off periods, and 2) using existing Cisco switch and router deep packet inspection capabilities to control traffic to sleeping hosts, enabling the hosts to sleep longer and not be woken up by spurious traffic. Our proposed Ethernet PAUSE Power Cycle (PPC) can allow a switch core or line card to largely power down during coordinated link off times. Preliminary results show that a 100 Mb/s edge Ethernet link with a 10 millisecond 50% on/off cycle has little or no perceivable effect on a typical user. PPC can be compatible with existing Ethernet NICs (unlike the proposed IEEE 802.3az Energy Efficient Ethernet Rapid PHY Selection method). Investigation of how best to schedule on/off cycles is needed. Internet hosts can sleep when inactive, but need support from the network to control spurious traffic that can cause unnecessary wake-ups. We will investigate how host power state can be communicated to a switch, and how existing capabilities (such as Cisco IPS and/or PISA application awareness) can be used to “protect” a host while it sleeps and still maintain the host's full network presence. Requested are $80,000 and equipment donations to support two Ph.D. students for two years. The students will be available for summer internships in 2008 and 2009, making it possible to directly exchange knowledge and research results with Cisco.
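The duty-cycle arithmetic behind PPC can be illustrated with a small calculation. The active and powered-down wattages below are hypothetical figures chosen for the sketch, not measurements from the study:

```python
def ppc_average_power(duty_on, p_active_w, p_off_w):
    """Average link power under PAUSE Power Cycling.

    duty_on    -- fraction of each cycle the link is up (e.g. 0.5)
    p_active_w -- power draw while the link is on (assumed value)
    p_off_w    -- residual power while powered down (assumed value)
    Returns (average watts, fractional savings vs. always-on).
    """
    avg_w = duty_on * p_active_w + (1.0 - duty_on) * p_off_w
    savings = 1.0 - avg_w / p_active_w
    return avg_w, savings

# A 10 ms cycle with a 50% on/off split, assuming 0.5 W active and
# 0.05 W powered down (illustrative numbers only):
avg, saved = ppc_average_power(0.5, 0.5, 0.05)
```

Note the cycle length itself does not enter the average-power formula; it matters for user-perceived latency, which is why the 10 ms figure is evaluated experimentally.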
Cristian Estan, Somesh Jha
Department of Computer Sciences, University of Wisconsin-Madison High Performance Signature Matching
Sponsors: Flavio Bonomi, Pere Monclus, Robert Olsen, Sumeet Singh
The motivating application for this proposal is signature matching within intrusion prevention systems. The goal of this proposal is to design and evaluate signature representations that enable the data-plane matching operation to proceed at high speed, together with algorithms for building instances of such representations without assistance from experts. This proposal builds in part on our prior work on XFAs as a memory-efficient representation of regular expressions suitable for signature matching in software. We plan to address the following four specific research problems: 1) building a fully automated signature compiler for XFAs, based on a detailed understanding of the core reasons behind the state-space blowup of DFAs; 2) investigating efficient implementations of traffic normalizers (e.g., for detecting character encodings) that can be applied before signature matching; 3) investigating efficient representations for signatures that require not just regular expression matching but also some level of protocol parsing; and 4) investigating signature representations that support matching algorithms able to process multiple bytes of input at a time.
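As background for the DFA state-space discussion, a minimal table-driven DFA matcher, the baseline representation that XFAs improve on, might look like the sketch below. The pattern and alphabet are toy examples, not real IPS signatures:

```python
def build_dfa(pattern, alphabet):
    """KMP-style DFA for one pattern: state = number of pattern
    characters matched so far; accept state = len(pattern)."""
    dfa = [{} for _ in range(len(pattern) + 1)]
    for state in range(len(pattern) + 1):
        for ch in alphabet:
            if state < len(pattern) and ch == pattern[state]:
                dfa[state][ch] = state + 1
            else:
                # fall back to the longest prefix of the pattern that is
                # a suffix of (matched-so-far + ch)
                s = (pattern[:state] + ch)[1:]
                while s and not pattern.startswith(s):
                    s = s[1:]
                dfa[state][ch] = len(s)
    return dfa

def matches(dfa, accept, data):
    """Scan data one byte at a time through the transition table."""
    state = 0
    for ch in data:
        state = dfa[state].get(ch, 0)
        if state == accept:
            return True
    return False
```

The per-byte table lookup is what makes DFAs fast; the state-space blowup the proposal targets arises when many such patterns are combined into a single automaton, which this single-pattern sketch does not show.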
Scott E. Fahlman
Language Technologies Institute, Carnegie Mellon University Scone Knowledge-Base System
In the summer of 2006, our research group at Carnegie Mellon University received a grant under the Cisco University Research Program to support ongoing research and development on the Scone knowledge-base system. The initial Cisco grant enabled us to make significant progress on Scone. In particular, we have improved and extended Scone's ability to represent and reason about the interaction of complex objects and tasks. We have also improved Scone's reliability, efficiency, and ease of use for demanding real-world applications. We are now requesting a renewal of that grant so that we can continue our work in these directions.
Management Science and Engineering, Stanford University Algorithms for a Robust Human Network
Sponsor: Flavio Bonomi
As the Internet has become more commoditized, and the Web has matured to allow for easy access to information (e.g., Google/Yahoo), easy interaction (e.g., Skype, MySpace, YouTube), and easy commerce (e.g., Amazon/eBay), the set of algorithmic challenges associated with the Internet has also evolved. Algorithms for this human network must account for human behavior, with all the associated variability. Traditional notions of robustness can be termed "infrastructure robustness", which deals with issues such as viruses, worms, phishing, denial-of-service attacks, etc. But the human network also needs to be "socially robust", e.g., robust and secure against ranking spam, recommendation spam, privacy attacks, misleading information, free-riders in peer-to-peer systems, etc. Social robustness is hard to achieve because it does not involve any improper use of the underlying engineering system. For example, a hotel manager creating many fake identities and writing fake reviews to improve the rank of her hotel on an Internet travel portal is structurally indistinguishable from a genuinely good hotel receiving good reviews from a large number of genuine users.
This proposal will study algorithmic techniques, economic incentives, and collaborative primitives that can enhance the social robustness of the Internet.
Computer and Information Science and Engineering (CISE) Department, University of Florida Human-Centric Networking: Interest-centric delivery via user profiling for future social networks
Sponsors: Debo Dutta, Sateesh Addepalli
Future networks will feature applications and devices that are highly personalized and human-centric. We envision future networks to support interest-based delivery, where an interest may be based on behavior or a user profile. Current communication paradigms (unicast, multicast, and directory-based services) require explicit expression of interest or node IDs and use interest-oblivious protocols. These paradigms are inadequate for future social networks, which need to support implicit membership based on interest as inferred by network protocols from behavior profiles.
In this project we propose the first framework to provide the ability to efficiently navigate on-line and mobile societies, based on interest. We enable interest-centric delivery services through behavioral profiling via data mining techniques. In addition, we propose to design efficient context-aware protocols for future social networking that heavily draw from the insights and understanding gained through the data mining and analysis.
In the first year, focus will be given to wireless (ad hoc and DTN) networks, with profiling performed at individual mobile nodes or at access points (when applicable). The problem in wired networks will then be addressed, with profiling done at aggregation points (e.g., routers).
We plan to use small-world models to capture relationships between users, eigenvector similarity matching for clustering, and profile casting for forwarding. A testbed deployment will be used to test and evaluate our designs.
Ramesh Johari and Srinivas Shakkottai
Department of Management Science and Engineering (MS&E), Stanford University Content-Aware Distribution Networks
We propose a new architecture that we refer to as a Content-Aware Distribution Network (CADN). A CADN consists of three elements: a media vault (MV), service nodes (SNs), and user-end nodes (UNs). The SNs are elements owned by the content distributor, but have relatively limited knowledge about the system as a whole. The user nodes might be routers, PCs, or set-top boxes installed in homes, and the content distributor has limited access to these. Thus, the three elements have decreasing levels of computation, storage, and communication capability, as well as controllability from the center of the system. Note in particular that complete control over the MV and SNs can afford the CADN significant performance gains over traditional unstructured peer-to-peer file dissemination. The CADN uses awareness of the content characteristics (necessary QoS guarantees, popularity, and relationships with other pieces of content) in a distributed fashion to decide when, where, and for how long to position the content. An essential aspect is to exploit the UNs as cheap, but less reliable, infrastructure elements.
Department of Computer Science, University of Waterloo Adaptive opportunistic communication
Sponsor: Sateesh Addepalli
With the widespread availability of multi-NIC wireless mobile devices, it has become necessary for users to choose which NIC to use for each application, and when to use that NIC. For instance, a user may want to send low-priority bulk data on a WiFi NIC that will become available some time in the future, because this reduces power usage as well as the dollar cost. Today, such decisions need to be made manually, which is both irksome to the user and potentially suboptimal. My research focus is on practical algorithms for four related problems that arise in this context:
How should a mobile device choose when to turn on a NIC to sense if there is a possibility of using it?
What algorithm should the device use to schedule transmissions both immediately, and in the future, given that it has only an approximate knowledge of the user's future mobility pattern?
Can the device learn the user's mobility pattern from the past movement history?
How can mobiles collaborate to help each other learn their mobility patterns? For instance, if a mobile in the car ahead of you on the road sees an open WiFi access point, it could tell you about it.
Solutions to these interrelated problems would help us build practical solutions for effectively using multi-NIC mobile wireless devices.
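In the simplest case, the scheduling question above reduces to a cost-based deferral decision. The toy policy below treats the deadline, the predicted WiFi arrival time, and the per-NIC costs as assumed inputs; the real algorithms must also cope with the uncertainty of the mobility prediction:

```python
def choose_nic(deadline_s, wifi_eta_s, cost_wifi, cost_cell):
    """Decide where and when to send low-priority bulk data.

    deadline_s  -- seconds until the data must be sent
    wifi_eta_s  -- predicted seconds until WiFi is available (None if
                   no WiFi is expected); an assumed oracle input here
    cost_wifi, cost_cell -- per-transfer costs (illustrative units)
    Returns (nic, delay_s): the chosen NIC and how long to wait.
    """
    wifi_feasible = wifi_eta_s is not None and wifi_eta_s <= deadline_s
    if wifi_feasible and cost_wifi < cost_cell:
        return ("wifi", wifi_eta_s)   # defer: cheaper NIC arrives in time
    return ("cellular", 0)            # send now on the always-on NIC
```

A practical version would replace the single ETA with a distribution learned from the user's movement history, which is exactly what the four research questions address.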
Edward W. Knightly
Electrical and Computer Engineering, Rice University Adaptive Protocols and Multimedia Services for Mesh Networks
Sponsor: Jan Kruys
Our completed project yielded three key outcomes: (i) expanded deployment of the TFA-Rice Wireless programmable mesh network to a user population nearing 2,500, (ii) analysis and contention-window policy solution to the TCP starvation problem, and (iii) a study of the impact of management overhead as network size scales.
We propose the following two projects for the next research phase. First, we will develop policies and algorithms to adapt MAC protocols to the diverse operating conditions encountered in large-scale deployments. Second, we will ensure high quality transmission of VoIP and interactive multimedia traffic via development of accurate capacity assessment tools and QoS routing and MAC policies. In both projects, we will employ measurement-driven protocol design as our methodology, utilizing our 2,500 user urban mesh network deployment.
Stephen M. Kosslyn
Psychology Department, Harvard University Human Networks on the Web: The Role of Social Prosthetic Systems
If you were missing a leg, you would be supplied with the modern equivalent of a wooden leg—a prosthesis, which fills in for a missing capacity. Similarly, if you were asked to multiply two 10-digit numbers together, you would use a calculator—which is a cognitive prosthesis, filling in for deficient cognitive capacities. The main cognitive prosthesis we rely on is other people, who extend our intelligence and help us understand and regulate our emotions. I plan to study how such "Social Prosthetic Systems" develop and are used over the web. Social Prosthetic Systems are genuine systems: Both parties interact to address a specific task or problem, but one party sets the agenda and the other acts as a prosthesis. I have published regarding why people would agree to serve in this capacity, and am interested in how such relationships can be formed and supported over the web.
Kathryn S. McKinley
Department of Computer Sciences, The University of Texas at Austin Automation of Source Code Analysis
One proven method for improving software reliability and quality is to use managed languages, such as Java, which provide memory and type safety, automatic memory management, dynamic code execution, and well-defined boundaries between type-safe and unsafe code. New software development is increasingly choosing managed languages to improve programmer productivity and to produce more reliable and correct programs. However, because of time-to-market demands and prohibitive development costs, Cisco and others continue to maintain and enhance their legacy C and C++ systems.
To transition legacy software to managed languages, this project will develop methods and tools for incrementally transitioning legacy C software to a mixture of Java and C. The research will use the Jeannie programming language, which seamlessly enables programmers to switch between Java and C within the same file. We will use a scalable, composable design in which our tools add a small amount of "glue" and otherwise reuse existing compilers, debuggers, and other tools. We outline a longer-term research vision that includes a novel program development environment with mixed-language compilation support, a transition design analysis tool, a transition refactoring tool, mixed-language debuggers, and dynamic analysis. This environment will facilitate and ease the transition of a program written in a sequential, unsafe language to one written in a managed language. We propose specific work on (1) experience with legacy code rewriting to inform the design of transition design analysis and transition refactoring tools, and (2) a scalable mixed-language debugger.
If successful, the benefits of this research will include methods and tools for enabling more productive programmers, high performance, and more reliable software.
Computer Science Division, Open University of Israel Zvi Lotker
Department of Communication Systems Engineering, Ben-Gurion University of the Negev Methods for Developing Efficient Multicore Algorithms
Increasingly, memory access patterns are recognized as a key performance factor both for uniprocessor and multiprocessor machines. We propose to study issues in the interface between the software and the cache mechanism, in order to improve the efficiency of high throughput / low latency algorithms.
The proposal focuses on three aspects of this interface:
Replication and layout of memory in order to maintain locality of reference in the cache. In cases where some memory items are frequently accessed together with many other items, so that not all of them can fit in the cache, locality of reference can be increased by replicating those "hub" memory items.
Cache management that incorporates knowledge about locality of reference in the access pattern. Here locality of reference may be modeled using a graph or a Markov chain whose states are the memory lines, and whose edges signify that one memory line is likely to be accessed after its adjacent memory lines.
Trade-offs between the length of the computation and the size of the algorithm description. To a first approximation, when an algorithm is executed repeatedly on many different inputs, a lengthy description may tax the system with L1 cache misses that outweigh the possible gain in the number of computation steps.
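The second aspect above can be sketched as a cache whose eviction decision consults a first-order Markov model of the access pattern. The transition probabilities are assumed to be profiled offline; both the model and the eviction rule are illustrative, not the mechanisms the project will necessarily adopt:

```python
class MarkovCache:
    """Toy cache that evicts the resident line judged least likely to be
    referenced after the line currently being accessed."""

    def __init__(self, capacity, transitions):
        self.capacity = capacity
        # transitions[a][b] = P(next access is line b | current line a),
        # assumed here to come from an offline profile of the workload
        self.transitions = transitions
        self.lines = set()

    def access(self, line):
        """Touch a line; returns True on a hit, False on a miss."""
        hit = line in self.lines
        if not hit:
            if len(self.lines) >= self.capacity:
                nxt = self.transitions.get(line, {})
                # victim = resident line with the lowest probability of
                # following the line we are bringing in now
                victim = min(self.lines, key=lambda l: nxt.get(l, 0.0))
                self.lines.discard(victim)
            self.lines.add(line)
        return hit
```

Replacing the profiled transition table with one learned online would connect this sketch to the first aspect (replicating "hub" lines that many states transition into).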
Burkhard Stiller, David Hausheer, Hasan
Department of Informatics (IFI), University of Zurich Scalable and Robust Decentralized IP Traffic Flow Collection and Analysis (SCRIPT)
Sponsors: Ralf Wolter, Benoit Claise
The increasing number of IP flows over future very high-speed links will become a challenge to traditional centralized solutions for IP traffic flow collection and analysis, due to the high demand for storage and processing resources, which are limited and costly. Major research has been done on finding smart sampling methods that reduce the number of IP packets and IP flows that need to be processed and stored while keeping a high level of accuracy. While sampling has proven to be a valid approach to reducing the processing and storage load, for certain applications such as usage-based accounting and intrusion detection, which require a high level of accuracy, sampling methods alone will not suffice for a centralized solution to scale to the increasing and highly variable load in terms of IP flow records to be collected and analyzed.
Thus, the goal of this project is to develop a scalable and robust decentralized architecture (called SCRIPT) for collecting and analyzing IP flow records with the necessary level of accuracy. The key idea is to utilize the resources of a large number of nodes, which collaboratively store and process IP flow records in a highly scalable, robust, and flexible manner. Furthermore, the project aims to develop self-configuration mechanisms that will allow new nodes to be easily added to or removed from the flow collection and analysis network. An important advantage of this approach is the possibility of gradually increasing storage and processing capacity as the number of IP flows grows, rather than completely replacing devices. Finally, by offering fast access to multi-resolution aggregation of flow data, SCRIPT will be applicable to several IP traffic analysis scenarios such as flow accounting, flow path monitoring, and distributed intrusion detection systems (IDS).
Computer Science Division, University of California, Berkeley Stabilizing BGP, Safely
Sponsors: Clarence Filsfils, Pradosh Mohapatra
Route instability is widely recognized as a major problem in the Internet, inflating infrastructural costs and worsening data-plane performance. Route flap damping provides some protection against instability, but introduces pathologies and reduces availability.
With concerns about the scalability of the routing system prompting a renewed interest in stability, we believe it's time for a more principled approach to stabilizing Internet routing. The goals of this proposal are to (1) identify classes of techniques for improving stability within the context of the existing BGP protocol, and (2) characterize what can be accomplished within each class. Our preliminary results suggest that significant improvements in stability can be obtained without the loss of availability associated with route flap damping, indicating a promising approach to safely stabilizing BGP.
Department of Electrical and Electronic Engineering, University of Melbourne Rich Delay Measurement
Sponsor: Pere Monclus
An important yet neglected problem in router performance monitoring is that of delay measurement. For network applications, end-to-end delay, or the time it takes a packet to traverse the network, is one of the two canonical performance measures, the other being loss. The impact of delay on real-time applications such as VoIP and video conference is clear, but it is also important for the performance of the transport layer, particularly high speed TCP, and is one of the metrics subject to SLAs.
Despite its importance, delay is surprisingly absent from the statistics commonly measured and reported by routers and switches. This project aims to establish the principles of, and identify and resolve the barriers to, implementing efficient and comprehensive delay measurement in routers. We will build on our prior measurement work on a Cisco GSR, which established principles and a methodology for the scalable measurement of a new, rich metric of delay performance. This metric is based on recording the duration and amplitude of busy periods, and allows a detailed and performance-centric view of delay and congestion, and of their relationship to utilization. The project will determine how best to implement this and other related schemes in existing router architectures, and will also examine the measurement of other kinds of traffic-class-specific delay.
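The busy-period metric can be illustrated on a trace of a FIFO queue: for each busy period, record its duration and its amplitude, taken here as the peak packet sojourn time within the period. The trace format and the exact amplitude definition are illustrative assumptions of this sketch:

```python
def busy_periods(trace):
    """Compute (duration, amplitude) per busy period of a FIFO queue.

    trace -- iterable of (arrival_time, service_time) pairs; units are
             arbitrary but consistent (illustrative assumption).
    """
    periods = []
    end_of_work = 0.0          # time at which the server drains its backlog
    start = peak = None
    for arrival, service in sorted(trace):
        if arrival >= end_of_work:         # queue empty: new busy period
            if start is not None:
                periods.append((end_of_work - start, peak))
            start, peak = arrival, 0.0
            end_of_work = arrival
        delay = end_of_work - arrival      # queueing delay of this packet
        peak = max(peak, delay + service)  # sojourn time = wait + service
        end_of_work += service
    if start is not None:
        periods.append((end_of_work - start, peak))
    return periods
```

Summarizing a trace this way is far cheaper than storing per-packet delays, which is the scalability argument behind the metric.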
Computer Science, University of British Columbia User-Level SCTP stack
We propose to port the current FreeBSD SCTP protocol stack to execute in user space in order to investigate issues related to middleware, multicore processors, and transport-protocol on-load versus off-load. The work builds on our experience using SCTP in the design of middleware for MPI (Message Passing Interface), a library for parallel and distributed computing. The main advantage of implementing a network protocol stack in user space is that a user-level stack is usually more portable, and it is easier to test, develop, and deploy new features for the protocol. Other benefits of a user-level stack include faster detection and diagnosis of stack and network problems, as well as a convenient tool for teaching. User-level stacks also provide the means to take advantage of high-performance network interfaces to support zero-copy and kernel bypass. MPI middleware provides an interesting and challenging environment for exploiting the advantages of a user-level stack. A user-level stack will make it possible to investigate issues related to high-performance network interfaces and the effective use of multiple cores, which are active areas of research in message-passing systems like MPI. Our hope is that this user-level implementation of SCTP can be the basis of a high-performance IP-based transport protocol for compute clusters and grid environments.
College of Computing, Georgia Tech Application Flow Management and Service Assurance
In recent years, significant progress on flow measurement techniques has been made, allowing accurate estimation of the Quality of Service (QoS) received by individual transport-layer (TCP or UDP) flows in terms of parameters such as bandwidth, delay, and jitter. However, little work has been done on relating these QoS parameters to users' Quality of Experience (QoE), defined as the perceived quality of network connectivity as reflected by the observed run-time performance of enterprise network applications. This problem is important because successful inference of this relationship leads to accurate prediction of QoE from provisioned QoS, which enables us to allocate network resources so that the QoE of enterprise users is maximized in a robust fashion. In this project, we propose to systematically study this exciting and challenging research problem. We plan to formulate it as a supervised machine learning (SML) problem. We expect to obtain experimental data (i.e., training data) from different sources (e.g., Cisco, student surveys at Georgia Tech) that record the QoE (provided by human subjects) and QoS parameters of past executions of various network applications. We will try to infer the complex relationship between QoE and QoS using advanced SML techniques such as support vector machines (SVMs).
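The proposal names SVMs as the learning technique; as a dependency-free stand-in, the sketch below uses 1-nearest-neighbour regression from a (bandwidth, delay, jitter) vector to a QoE score to show the shape of the supervised problem. The training points are invented examples, not real survey data:

```python
def predict_qoe(training, qos):
    """Predict a QoE score from QoS parameters by 1-nearest neighbour.

    training -- list of ((bw_mbps, delay_ms, jitter_ms), qoe) pairs,
                e.g. collected from user surveys (invented here)
    qos      -- the (bw_mbps, delay_ms, jitter_ms) vector to score
    """
    def dist(a, b):
        # Euclidean distance; a real system would normalize the features
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    _, qoe = min((dist(q, qos), label) for q, label in training)
    return qoe
```

An SVM (or SVM regression) would replace the nearest-neighbour rule with a learned decision surface, but the training-data format and the QoS-to-QoE mapping are the same.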
Department of Computer Science, University of California, Santa Barbara Kamil Sarac
Department of Computer Science, University of Texas at Dallas Bridging Support in Mixed Deployment Multicast Environments
Sponsor: Greg Shepherd
This proposal is a second-year extension of a previous grant. The focus of our grant this past year was on developing solutions to support multicast in the last mile of network deployments. These networks are highly heterogeneous and often do not support native multicast. Without end-to-end support, native multicast is not available as a solution, and neither are its benefits. One particularly promising technology being standardized by the IETF is Automatic IP Multicast Without Explicit Tunnels, commonly known as AMT. The development of AMT has reached a point where its functionality is in the early stages of implementation and deployment. In the previous year, we developed implementations of AMT relays and gateways; collaborated with Cisco to test interoperability between a router-based solution and our end-system-based solution; and investigated several multicast security vulnerabilities. In the second year, we plan to expand the support for our AMT solution, continue interoperability testing, deploy a joint solution at large scale, and see the AMT draft through to RFC status. By the end of the next year, we fully expect AMT to be an integral part of the Internet's multicast infrastructure.
Computer and Information Sciences Dept, University of Delaware Improving SCTP with Non-Renegable Selective Acks (NR-SACKs)
Sponsor: Randall Stewart
Since cumulative acknowledgments (acks) were defined for TCP in RFC793, two significant mechanisms have extended the concept of data acks in providing end-to-end reliable transport layer data transfer: SACKs and Duplicate-SACKs. We propose to design and investigate a further extension: the Non-Renegable SACK (NR-SACK). NR-SACKs would supplement SACKs by identifying out-of-order data that has progressed to become the sole responsibility of the receiver, such as but not limited to data that has been delivered to the receiving application. NR-SACKs are primarily proposed to improve throughput for the Stream Control Transmission Protocol (SCTP). SCTP's multistreaming service divides an end-to-end transport association into independent logical data streams. Data that arrives in-order within a stream can be delivered to a receiving application even if that data is out-of-order relative to other streams. A transport sender has no reason to maintain a copy of delivered data in its retransmission queue.
The term non-renegable refers to the fact that, according to current TCP and SCTP specifications, data that has been acked by a SACK, but not yet by a cumulative ack, can be reneged. That is, the transport receiver can discard the SACKed data, thus requiring the transport sender to retransmit it. However, situations exist in which a transport receiver knows that reneging will never take place. NR-SACKs are a mechanism to share this information with the transport sender. This research will (1) formally define the semantics of NR-SACKs in an Internet Draft, (2) estimate the potential throughput benefits of using NR-SACKs in SCTP via ns-2 simulation, and (3) demonstrate technical feasibility and actual benefits by deploying NR-SACKs into the latest FreeBSD version of SCTP.
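The sender-side consequence of NR-SACKs can be sketched as follows. The data structures and the shape of the acknowledgment are illustrative, not the actual FreeBSD SCTP implementation or the eventual Internet Draft format:

```python
class RtxQueue:
    """Toy SCTP-style retransmission queue at the sender."""

    def __init__(self):
        self.buffered = {}    # TSN -> data still held for retransmission

    def send(self, tsn, data):
        self.buffered[tsn] = data

    def on_sack(self, cum_ack, gap_tsns, nr_tsns):
        """Process one SACK carrying a cumulative ack, ordinary gap
        acks, and NR-SACK'd TSNs.

        gap_tsns -- ordinarily SACKed TSNs: the receiver may still
                    renege, so the sender must keep the data buffered.
        nr_tsns  -- non-renegable TSNs: the receiver guarantees it will
                    not renege, so the data can be freed immediately.
        """
        # cumulative ack frees everything up to and including cum_ack
        for tsn in [t for t in self.buffered if t <= cum_ack]:
            del self.buffered[tsn]
        # NR-SACKed out-of-order data is freed early; this is the
        # memory-saving difference NR-SACKs introduce over plain SACKs
        for tsn in nr_tsns:
            self.buffered.pop(tsn, None)
```

With plain SACKs, the TSNs in `nr_tsns` would have stayed buffered until covered by a cumulative ack; freeing them early is what improves sender memory use and, indirectly, throughput.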
Department of Electrical & Computer Engineering, University of Texas at Austin Network Coding's Impact on Ad Hoc Network Capacity
Sponsors: Xuechen Yang, Jan Kruys
The goal of this research is to understand the actual viability and impact of network coding schemes in a practical network setup. Network coding in a wireless network relies on nodes being in certain positions to help with routing. While nearly any set of positions allows for some savings with network coding, some configurations are much better than others. We will use stochastic-geometric tools to model typical node locations in the network and make predictions about the capacity impact that network coding is likely to have. This research will be challenging because it requires notions of routing and end-to-end communication to be considered, but these aspects are difficult to incorporate when talking about network capacity. Indeed, even a well-accepted definition of capacity is lacking in such a scenario.
Centre for Advanced Internet Architectures, Swinburne University of Technology Heuristics to reduce BGP Update Noise
Sponsors: Pradosh Mohapatra, Clarence Filsfils
Our goal is to provide a number of relatively simple mechanisms, implementable in existing BGP implementations, that would significantly reduce the update-processing load of BGP speakers in the default-free zone of the Internet. We believe that such work could have a significant impact on the scalability prospects of BGP in coming years. We also envisage implications for the deployment prospects of secure BGP mechanisms, since the same form of heuristics applied to update sequences would allow security processing to be deferred for updates that the heuristics mark as short-term transient routing states.
Department of Computer Sciences, University of Wisconsin, Madison Optimizing the Smart Rule Cache for Robustness, Security, and Efficiency
Sponsors: David Tsiang, Doron Oz
We recently proposed a new approach to enable fast classification that scales efficiently with the increasing volume of flows and transmission rates in the Internet. Our proposed approach, the Smart Rule Cache (SRC), minimizes the use of expensive and power-intensive Ternary Content Addressable Memories (TCAMs) within a router for classification tasks without affecting either classification speed or accuracy. Internet backbone traces indicate that, using our approach, the volume of TCAM required in line cards can be reduced by a few orders of magnitude, leading to significant cost and energy savings.
The goal of our continued effort in the next year would be to study a number of detailed design questions for SRC, going beyond the basic design.
These include (i) the hardware requirements and structure of the router line card, (ii) the potential for parallelism in its design, for greater scalability, and (iii) the design, analysis, and evaluation of its potential vulnerabilities, as well as countermeasures against malicious attacks.
This work has already received funding from the Cisco Research Center for one year in which the basic ideas were developed and demonstrated. Many further challenges will be evaluated in the coming year.
Efi Arazi School of Computer Science, Interdisciplinary Center Herzliya (IDC) Danny Hendler
Computer Science Department, Ben-Gurion University of the Negev (BGU) Ronny Roth
Computer Science Department, Technion: Israel Institute of Technology (TEC) Fault-Resilient TCAMs
Sponsor: David Belz (dbelz)
Ternary content-addressable memories (TCAMs) are increasingly used for high-speed packet classification. TCAMs compare packet headers against all rules in a classification database in parallel and thus provide high throughput unparalleled by software-based solutions.
While all memory types are susceptible to errors stemming from bit upsets, TCAM memory is subject to even higher error rates due to the high density of TCAM chips. Solutions for regular memory (such as SRAM and DRAM) are inapplicable to this unique type of memory, which poses entirely new challenges. In regular memory, the input is an address and the output is the value residing at that address. In TCAM memory, the input is a value and the output is the lowest address, if any, whose content matches the value.
In this proposal, we suggest exploring new ways of making TCAM devices resilient to errors by adding TCAM-specific detection and correction codes.
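The lowest-match semantics described above can be modeled in a few lines (a toy software model; the entry contents are made up for illustration). Each TCAM entry is a value plus a "care" mask marking which bits must match:

```python
# Toy software model of TCAM lookup semantics: the result is the
# LOWEST address whose entry matches the key on all "care" bits.

def tcam_lookup(entries, key):
    for addr, (value, care) in enumerate(entries):
        if (key & care) == (value & care):
            return addr          # lowest matching address wins
    return None                  # no entry matches

entries = [
    (0b1010_0000, 0b1111_0000),  # matches any key 0b1010xxxx
    (0b1010_1100, 0b1111_1111),  # exact match only
    (0b0000_0000, 0b0000_0000),  # all bits "don't care": default rule
]
assert tcam_lookup(entries, 0b1010_1100) == 0  # entries 0 and 1 both match; 0 wins
assert tcam_lookup(entries, 0b0101_0101) == 2  # only the default matches
```

The model also shows why bit upsets are so damaging here: flipping a single bit of either the value or the care mask silently changes the *set of keys* an entry matches, and can also change which entry is the lowest match, so conventional per-word error-correcting codes do not directly apply.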
Centre for Advanced Internet Architectures, Swinburne University of Technology FreeBSD Implementation of an SCTP friendly NAT
Sponsor: Randall Stewart
Network Address Translation (NAT) is typically used to share a single Internet address amongst a number of users. Extending the common TCP/UDP NAT approach to the SCTP protocol is not viable: the SCTP specification requires the checksum over the whole packet (not just the header) to be recalculated for each packet, which is prohibitive, particularly for small home router implementations. Further, SCTP offers multi-homing, which presents new challenges for the NAT code tracking a single SCTP connection. We propose to develop a NAT implementation supporting SCTP, to be released for the FreeBSD 6.2 (or its replacement as of August 2007) platform. Our release code will utilise an existing NAT framework such as ipfw or ipf so that it can be practically deployed on real systems. The NAT will track SCTP connections via the Verification Tag (VTag) field and retain connection details should one end of a multi-homed session change end-points. We also propose to test this implementation under a number of different usage and failure-mode scenarios; the results of these tests will be published and can be used to promote the use of SCTP "in the wild".
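The VTag-based tracking idea can be sketched as follows (a simplified illustration with hypothetical names, not the proposed FreeBSD code): the NAT keys its state on the verification tag rather than on an (address, port) tuple, so a multi-homed peer can change addresses mid-session without losing the mapping.

```python
# Hypothetical sketch of VTag-keyed SCTP NAT state.

class SctpNat:
    def __init__(self):
        self.table = {}  # vtag -> internal host

    def on_outbound_init(self, vtag, internal_host):
        # Learn the mapping when the internal host initiates an association.
        self.table[vtag] = internal_host

    def on_inbound(self, vtag, src_addr):
        # Look up by VTag alone: src_addr may be any of the remote
        # peer's addresses (multi-homing) and may change mid-session.
        return self.table.get(vtag)

nat = SctpNat()
nat.on_outbound_init(vtag=0xDEADBEEF, internal_host="10.0.0.5")
assert nat.on_inbound(0xDEADBEEF, "192.0.2.1") == "10.0.0.5"
# Same association arriving from a different remote address still maps:
assert nat.on_inbound(0xDEADBEEF, "198.51.100.7") == "10.0.0.5"
```

A TCP-style NAT keyed on the 4-tuple would treat the second packet as belonging to an unknown connection, which is exactly the multi-homing failure mode the proposal aims to avoid.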
Department of Computer & Information of Science & Engineering, University of Florida Optimizing Access Control Lists
Sponsor: Bo Zou
The configuration of a firewall may contain numerous ACLs associated with physical network interfaces, logical network interfaces, and tunnels. Optimizing the ACLs has a huge impact on firewall performance; verifying the correctness of the ACLs with respect to the end-to-end policies is also critical. Research has shown that hand-configured firewall configurations are not only inefficient but also error-prone. In the PI's experience with CSPM, security management tools can also produce highly inefficient configurations. This project will provide a comprehensive study of the important subjects of ACL optimization and verification. Efficient algorithms that address these problems will significantly improve firewall performance, reduce configuration errors, and, by relieving administrators of such heavy-duty tasks, cut down the time for constructing firewall configurations. Finally, the algorithms may directly contribute to the future success of Cisco's security management software.
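One elementary ACL optimization, shown here as a hedged sketch rather than the project's algorithm, is removing rules that are fully shadowed by an earlier rule in a first-match list. The toy rules match a single address range only; real ACLs match on multiple header fields, and full optimization must also consider shadowing by *combinations* of earlier rules.

```python
# Minimal sketch: remove rules fully shadowed by one earlier rule.

def covers(a, b):
    """True if range a fully contains range b."""
    return a[0] <= b[0] and b[1] <= a[1]

def remove_shadowed(rules):
    """rules: list of (src_lo, src_hi, action), evaluated first-match."""
    kept = []
    for rule in rules:
        lo, hi, _ = rule
        if any(covers((k_lo, k_hi), (lo, hi)) for k_lo, k_hi, _ in kept):
            continue  # an earlier rule always matches first: rule is dead
        kept.append(rule)
    return kept

rules = [
    (10, 20, "deny"),
    (12, 18, "permit"),   # shadowed by (10, 20): can never match
    (30, 40, "permit"),
]
assert remove_shadowed(rules) == [(10, 20, "deny"), (30, 40, "permit")]
```

Shadowed rules are not just wasted lookup work; they are often symptoms of the configuration errors the verification side of this project targets, since the administrator presumably intended the dead rule to take effect.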
Romit Roy Choudhury
Dept. of Electrical & Computer Engineering and Dept. of Computer Science, Duke University Exploiting Smart Antennas for Wireless Networks
Sponsors: Jan Kruys and Fred Anderson (freda)
In the past, the vast majority of wireless networking protocols have explicitly or implicitly assumed an omnidirectional antenna at the radio layer. With recent advances in signal processing and antenna technologies, smart antennas have become a viable alternative. Smart antennas are analogous to spotlights: energy can be focused in a desired direction, over longer distances. In addition, when receiving with smart antennas, desired signals can be selectively filtered from unwanted interference. Our research was among the first to show that selective transmission and reception can lead to significant performance benefits in wireless networks. However, we also observed that the benefits cannot be realized by merely replacing omnidirectional antennas with smarter ones. A holistic network architecture is necessary, one that interweaves higher-layer networking protocols with advanced antenna capabilities. As antennas begin to scale in terms of cost and size, we believe that this is the moment to harness the potential of antenna technology in wireless networking systems. This proposal, Spotlight, aims to achieve this goal.
Department of Electrical and Computer Engineering, University of Delaware Beamforming in IEEE 802.11n for Wide-Area Applications
Sponsors: Brett Douglas, Jan Kruys
MIMO systems have been extensively studied and are now fairly well-understood. The antennas in a MIMO system can be employed in a number of ways. A spatial multiplexing gain can be achieved by transmitting independent data streams over individual antennas. The maximum gains are achieved when the channel is known at the transmitter and receiver and transmit precoding and receiver weighting are performed. Alternatively, the same stream can be sent over each antenna to obtain a diversity gain. When the channel is known at the transmitter, the data stream can be optimally weighted to maximize performance. If the channel is not known, diversity gain can be obtained using a space-time code. The IEEE 802.11 standard has been very successful, and the extension in 802.11n promises (i) increased data throughput through spatial multiplexing; and (ii) increased range through exploiting spatial diversity. In particular, for the high-throughput case, the transmitter is designed to operate with one to four independent data streams using one to four antennas, and several modes are described, including spatial multiplexing, space-time block coding, or some combination. In addition, there are options for beamforming in which the transmitter utilizes knowledge of the MIMO channel to improve reception at the receiver. The Spatial Multiplexing (SM) and Space-Time Block Coding (STBC) modes are fairly well understood. The advantage of transmit beamforming, given the overhead required, is less well understood; this is the focus of the currently funded project. Moreover, the application of 802.11n in outdoor environments and the effects (and mitigation) of interference have received very little attention; this is the focus of our new proposal.
Department of Computer Science, University of Massachusetts, Amherst In-Network Complex Event Processing over Distributed Streams
Sponsor: Krishna Sankar
In this proposal, we identify core functions of complex event processing (CEP), including filtering, aggregation, correlation, transformation, and predication, and argue for in-network implementation to make computer networks proactive and adaptive. The combination of CEP, a new stream processing paradigm, and its efficient in-network implementation, presents significant challenges that have not been sufficiently addressed before. In this project, we devise novel automata-based mechanisms and appropriate communication protocols for efficient pattern detection across distributed streams. We also propose initial extensions of these mechanisms to address a rich set of issues related to pattern predication, out-of-order and out-of-sync events, and multi-pattern detection. We plan to collect use cases, including trace data and typical patterns, from financial, healthcare, and network monitoring applications to evaluate our proposed algorithms and protocols.
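The automata-based pattern detection can be illustrated with a deliberately simplified single-stream sketch (hypothetical event names; the proposed mechanisms handle distributed streams, predicates, and out-of-order events, which this toy omits). Each partial match is an automaton run waiting for its next symbol:

```python
# Count completed matches of the event sequence A -> B -> C in a
# stream, with unrelated events interleaved. waiting[k] counts runs
# whose next required symbol is pattern[k].

def count_matches(stream, pattern=("A", "B", "C")):
    n = len(pattern)
    waiting = [0] * n
    matches = 0
    for event in stream:
        # Advance stages from last to first so that a single event
        # advances each run at most once.
        for k in range(n - 1, 0, -1):
            if event == pattern[k] and waiting[k]:
                if k == n - 1:
                    matches += waiting[k]      # runs complete the pattern
                else:
                    waiting[k + 1] += waiting[k]
                waiting[k] = 0
        if event == pattern[0]:
            waiting[1] += 1                    # every A starts a new run

    return matches

assert count_matches(list("AxBAyCBC")) == 2
assert count_matches(list("ACB")) == 0         # order matters
```

Counting runs per stage, rather than materializing each run, keeps state O(pattern length) regardless of stream length, which hints at why an in-network implementation is plausible at all.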
College of Computing, Georgia Institute of Technology Ingress Traffic Engineering and Performance Routing
Sponsors: Dana Blair, Monique Morrow
In earlier research, we investigated the performance and stability of outbound traffic engineering in Optimized Edge Routing (OER). In this project, we will expand that research in two new directions. First, we will focus on ingress traffic engineering, again in the context of OER, but without using BGP. The motivation is that many multihomed stub networks do not run BGP, and further, using BGP for ingress traffic engineering raises concerns about the size of the DFZ routing table and the volume of BGP updates. Instead of BGP, we will investigate methods that conduct ingress traffic engineering using measurement-based DNS name resolution. Our research will model, simulate, and experiment with the effectiveness of this DNS-based method, the impact of DNS caching, and the reliability of the associated active probing. In the second thread of this project, we will generalize the previous methods for outbound/inbound optimized edge routing in the direction of intradomain Performance Routing (PfR). PfR allows a router to dynamically sense congestion, or other network impairments, and dynamically reroute traffic through a better path. Our research in this area will focus on the fundamental concern: how to make sure that such dynamic routing will be stable and that it will actually improve application performance. Our preliminary results show that appropriate measurement techniques combined with a necessary router coordination protocol can provide stable and effective adaptive routing.
Department of Computer Science, University of Toronto Traffic Burstiness and Buffer Sizing in Internet Routers
Sponsor: Valentina Alaria
Recent theoretical results in buffer sizing research suggest that core Internet routers can achieve high link utilization if they are capable of storing only a handful of packets. The underlying assumptions are that the traffic is non-bursty and that the system is operated below 85-90% utilization. These results can have significant consequences for the design of all-optical routers -- where storing packets for long periods of time is not feasible -- as well as electronic routers. In this project, we will develop a test-bed for experimental evaluation of the buffer-sizing requirements of Internet routers. Such experiments are extremely difficult in today's Internet: backbone network operators tend to resist any change, and even if they cooperate, modifying network components, architecture, and especially traffic is extremely difficult. We will use NetFPGA -- a PCI-form-factor board that contains reprogrammable FPGA elements and four Gigabit Ethernet interfaces -- to build configurable live traffic generators, programmable routers (with finely tunable buffers), and high-precision buffer-occupancy monitors, all designed specifically for buffer-sizing experiments. Using this test-bed, we will experimentally study the buffer size requirements of Internet core routers.
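To give a sense of the stakes, here is a back-of-envelope comparison (illustrative numbers, not measurements) between the classical bandwidth-delay-product rule of thumb and one strand of the small-buffers theory, which divides it by the square root of the number of desynchronized long-lived TCP flows:

```python
# Buffer sizing back-of-envelope: classical rule B = RTT * C versus
# the "small buffers" result B = RTT * C / sqrt(n). All numbers are
# hypothetical, chosen only to show the scale of the difference.
import math

rtt = 0.25                  # seconds (round-trip time)
capacity = 10e9             # 10 Gb/s link
n_flows = 10_000            # long-lived TCP flows sharing the link

bdp_bits = rtt * capacity                    # classical: 2.5 Gbit of buffer
small_bits = bdp_bits / math.sqrt(n_flows)   # ~25 Mbit: 100x smaller

assert small_bits == bdp_bits / 100
```

The further "handful of packets" results referenced above shrink the buffer by additional orders of magnitude under stronger traffic assumptions; those assumptions are precisely what the proposed NetFPGA test-bed is designed to probe experimentally.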
Department of Mathematics and Computer Science, Lehman College of the City University of New York Nancy Lynch
Electrical Engineering and Computer Science, MIT A New MAC-Layer Paradigm for Mobile Ad-Hoc Networks
Sponsor: Ralph Droms
The wireless MAC layer affects higher-layer network protocols in many ways, requiring that the usual Internet strategies for routing, security, and reliable message delivery be rethought. Node mobility requires further rethinking of the higher layers. We propose a lower-layer paradigm for communication and mobility-hiding to solve two difficult problems for Mobile Ad Hoc Networks (MANETs):
Successful message delivery in the presence of malicious adversaries.
Message routing over large, mobile networks.
The proposed lower-layer communication paradigm uses multiple channels, either frequency-based or time-based, to avoid collisions and to foil adversaries. The proposed mobility-hiding mechanism is virtual nodes. A virtual node is associated with a geographical region and is implemented by the physical nodes that are in the region. Virtual nodes are stationary and so support the use of traditional wireline protocols. However, a virtual node may be more likely to fail than a normal node, changing the behavior of the traditional protocols.
We will use a combination of simulation and analysis to evaluate the performance of communication and routing algorithms based on this paradigm. We will base these evaluations on abstract complexity measures such as the percentage of messages delivered, the total number of messages sent and received, and the latency of message delivery. Because of complex interactions between the various design decisions at the MAC and Internet layers, we will not attempt detailed performance studies. Instead, we will use the results obtained by simulation to suggest analytical results, subject to proof, to be used as guidelines for network design. This project is intended to be exploratory, to determine the feasibility and value of the new lower-layer communication paradigm. If the results justify further work, we will extend the project using additional funding sources.
Computer Laboratory, University of Cambridge Applied Metarouting
Sponsor: David Ward
Network connectivity is implemented using dynamic routing protocols. Today these protocols are few in number and are not well suited for many networks. Existing protocols are pressed into service in highly complex and contorted ways. This leads to high cost of operations, lack of flexibility in the face of new demands, and low levels of network robustness. The Metarouting project proposes a radically new approach to network routing. The basic idea is to implement a metalanguage for defining routing protocols that could be used by network operators to define new protocols that meet the needs of their networks. The routing metalanguage is based on a firm theoretical framework that allows protocol specifications to be automatically checked for correctness.
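The algebraic idea behind metarouting can be sketched in a few lines (a simplified illustration of routing algebras, not the project's metalanguage): a routing "algebra" packages a preference operation and a path-extension operation, and one generic relaxation solver then works for any algebra that satisfies the required properties.

```python
# Two toy routing algebras run through the same generic solver:
# shortest-path (add weights, prefer smaller) and widest-path
# (bottleneck bandwidth, prefer larger).

shortest = dict(better=min, extend=lambda a, b: a + b,
                identity=0.0, worst=float("inf"))
widest   = dict(better=max, extend=min,
                identity=float("inf"), worst=0.0)

def best_cost(links, src, dst, alg, nodes):
    """Generic Bellman-Ford-style relaxation over an algebra."""
    cost = {v: alg["worst"] for v in nodes}
    cost[src] = alg["identity"]
    for _ in range(len(nodes) - 1):
        for u, v, w in links:
            cost[v] = alg["better"](cost[v], alg["extend"](cost[u], w))
    return cost[dst]

links = [(0, 1, 10), (1, 2, 10), (0, 2, 25)]   # (u, v, weight/bandwidth)
nodes = [0, 1, 2]
assert best_cost(links, 0, 2, shortest, nodes) == 20   # 10 + 10 beats 25
assert best_cost(links, 0, 2, widest, nodes) == 25     # direct link is widest
```

The metalanguage's contribution, on this view, is letting an operator *define* the algebra declaratively while the framework automatically checks the algebraic properties (such as monotonicity) that guarantee the generic solver converges correctly.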
Computer Science Department, Connecticut College Shared Bottleneck Detection and Response Mechanisms For Concurrent Multipath Transfer (CMT)
Sponsor: Randall Stewart
Concurrent Multipath Transfer (CMT) extends the Stream Control Transmission Protocol's (SCTP's) multihoming capabilities for concurrent transfer of new data between source and destination hosts via two or more end-to-end paths. In prior research, we inquired into design considerations, established feasibility, and investigated performance benefits and tradeoffs of CMT. CMT is now part of the FreeBSD SCTP stack. We now seek to resolve a vital open question, whose answer will mature CMT technology significantly. Our research thus far assumed that paths used in CMT do not share any bottleneck links (i.e., points of congestion). Indeed, this restrictive assumption is the major reason for the IETF's hesitancy in accepting CMT. We propose to relax our assumption and investigate shared bottleneck detection and response mechanisms for CMT. In particular, we propose to investigate mechanisms for a CMT sender to (i) detect the presence of a shared bottleneck, (ii) share congestion state when a shared bottleneck is detected, and (iii) seamlessly migrate between using shared and separate congestion state during an association, as per need. We will incorporate our results into the FreeBSD implementation of CMT.
Department of Electrical and Computer Engineering, Iowa State University Survivable Network Operation Using Network Coding
Sponsor: Iftekhar Hussain
Survivable network operation requires networks to be able to detect failures as soon as they occur, and to then reroute traffic over alternate paths. Two objectives, which are usually contradictory, are the requirements to recover from failures expeditiously, and to minimize the resources reserved by the network to recover from those failures. In this project we propose to use network coding to achieve both objectives. With the use of network coding, different sessions combine their signals on shared protection circuits, hence reducing the amount of required resources, while always providing receivers with backup copies of transmitted signals, therefore allowing instantaneous data recovery. The project will develop network coding-based protection strategies for single link failures, and will then extend the strategies to protect against multiple link failures. Implementation strategies in different protocols, such as IP and MPLS, will also be developed, and additional router and switch functionalities to implement the proposed protection techniques will be introduced. Hybrid strategies, combining different types of protection techniques, including network coding, 1:N and M:N protection, will also be investigated as a means of further reducing the amounts of required resources, while guaranteeing an upper bound on the data recovery time.
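The core coding idea, combining sessions on a shared protection circuit, can be shown with the simplest possible example (illustrative two-session XOR coding; the proposal's strategies generalize this to multiple failures and protocol integration):

```python
# Two sessions share ONE protection circuit carrying the XOR of
# their signals. If either working path fails, the receiver recovers
# its data instantly from the surviving copy and the coded backup,
# with no signaling or rerouting delay.

s1 = b"\x12\x34"                                    # session 1's signal
s2 = b"\xab\xcd"                                    # session 2's signal
protection = bytes(a ^ b for a, b in zip(s1, s2))   # shared backup circuit

# Session 1's working path fails: recover s1 = protection XOR s2.
recovered = bytes(a ^ b for a, b in zip(protection, s2))
assert recovered == s1
```

The resource saving is visible even here: dedicated 1+1 protection would reserve one backup circuit per session, whereas the coded scheme reserves one circuit for both, while still delivering a backup copy continuously.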
Department of Computer Science and Engineering; Department of Electrical Engineering, The Pennsylvania State University Per-flow state management in Internet routers: mass purging and heavy-hitter detection
Sponsor: Cetin Seren
In the context of a surveillance system in an Internet router for all active TCP sessions, we will consider two problems: first, simultaneously purging a potentially enormous number of sessions deemed stale; second, identifying "heavy hitter" TCP sessions. Our approaches will be sensitive to the limited hardware and software resources allocated for this purpose in a linecard, in addition to the very high data rates that modern linecards handle; specifically, we are interested in avoiding excessive I/O to free-list memories. We propose to investigate three alternatives for memory purging: an opportunistic system that does employ a free list, a framework involving logical swapping of 1-bit flow-enable and touch vectors that does not employ a free list, and finally a randomized system that also identifies the heavy-hitter sessions with no additional state.
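As context for the heavy-hitter half of the problem, here is one well-known bounded-state technique, the Misra-Gries summary (shown purely as an illustration of the state/accuracy trade-off, not as the proposers' randomized scheme): it keeps at most k counters no matter how many flows appear, which matches a linecard's fixed memory budget.

```python
# Misra-Gries heavy-hitter summary: at most k counters ever exist.
# Any flow exceeding a 1/(k+1) fraction of the packets is guaranteed
# to remain in the summary when the stream ends.

def misra_gries(packets, k):
    counters = {}
    for flow in packets:
        if flow in counters:
            counters[flow] += 1
        elif len(counters) < k:
            counters[flow] = 1
        else:
            # Table full: decrement every counter, evicting zeros.
            for f in list(counters):
                counters[f] -= 1
                if counters[f] == 0:
                    del counters[f]
    return counters

packets = ["a"] * 60 + ["b"] * 25 + ["c", "d", "e"] * 5
summary = misra_gries(packets, k=2)
assert "a" in summary   # 60% of the traffic: guaranteed to survive
```

The counts in the summary are underestimates (here "a" ends at 45 rather than 60), which is the price of bounded state; a second pass or auxiliary counters can refine them when exact counts are needed.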
Office of the Vice President for Research, Rochester Institute of Technology Advanced Networking Infrastructure Projects for Computing and Collaboration
Sponsor: David Jaffe
The Research Computing group at RIT is building a research computing infrastructure to support the growing needs of researchers across all disciplines. High-performance networks are a crucial part of the overall plan, which targets two specific research needs:
High-speed/low-latency interconnects for high performance message passing and data transfers within clusters and across clusters in grids
High-speed/low-latency multipoint connections for live sustained uncompressed high definition multicast video streams
Current solutions for these needs are expensive, proprietary systems that are highly specialized for their targeted uses. We propose to evaluate commodity, standardized solutions that are less expensive, more universal, and easier to support, such as 10 Gigabit Ethernet. We believe that the performance advancement of such generic networking solutions makes them an attractive alternative to traditional dedicated solutions.
We propose to install, benchmark, and evaluate these solutions in terms of a cost/benefit analysis, especially the ability to implement a single solution for these multiple needs. While standard solutions such as 10 Gigabit Ethernet are virtually commodity products now, they are not being adopted as readily as one might expect. Our intent is to demonstrate the viability of integrating readily available high-performance network components into an advanced research computing infrastructure.
RIT currently does research on advanced collaboration environments (ICE Lab, http://www.rit.edu/~rc/ice.html), grid development (NYSGrid, http://www.nysgrid.org), and networking protocols (NSSA, http://www.nssa.rit.edu). Personnel experienced in these research areas will support and guide the deployment and testing of the proposed advanced networking capabilities. This project is planned to take six months from initial design to first testing of results, with another six months for a complete evaluation.
Department of Electrical Engineering and Computer Science, Northwestern University Diagnosing Spatio-Temporal Internet Congestion Properties
Sponsor: Bruce Davie
The ability to accurately detect congestion events in the Internet and reveal their spatial (i.e., where they happen) and temporal (i.e., how frequently they occur and how long they last) properties would significantly improve our understanding of how the Internet operates. Moreover, the ability to accurately pinpoint congested locations in real time is useful for fault diagnosis, design of advanced delay-based congestion control protocols, and for efficient overlay-network construction. We propose to design and implement a novel measurement methodology and tool, which we call Pong, capable of accurately revealing spatio-temporal Internet properties. Pong (i) uses queuing delay as indicative of congestion, and (ii) strategically combines end-to-end probes with probes targeted to intermediate nodes. It (iii) achieves high sampling frequency and dramatically improves spatial detection granularity (i.e., from path segments to individual links), (iv) considerably enhances the measurement quality by adjusting the probing methodology based on the observed path topology, and (v) deterministically detects moments of its own inaccuracy. We propose to deploy a trigger-based monitoring system and utilize Pong measurements in the wide-area Internet with the following goals: (i) reveal joint spatio-temporal congestion characteristics of today's Internet, (ii) study congestion patterns and understand correlation across multiple geographic areas and time-scales, (iii) expose the role of intra- and inter-AS routing and peering policies in inducing congestion events, and (iv) identify how network disruptions (and consequent re-routing events) affect spatio-temporal congestion properties in a given Internet area.
Computer Science and Engineering, Penn State Security for Internet/IMS Convergence
Sponsor: Cetin Seren
The deployment of the IP Multimedia Subsystem (IMS) will mark the beginning of a large-scale convergence of telecommunications networks and the Internet. The unification of these systems through an all-IP core will permit cellular providers to seamlessly support both traditional voice and expanded data services. However, such interconnection will also allow many of the security problems common in the Internet (e.g., Denial of Service, core network element compromise, malware-generated traffic) to directly impact the telecommunications infrastructure. In response, this work proposes to characterize the impact of such attacks and their mitigation. This will have a direct impact on Cisco IMS product offerings, such as the family of products that comprise the Cisco Service Exchange Framework Products and Solutions. Specifically, we aim to characterize attacks and determine the feasibility of a class of solutions targeted at preventing overloads in the network.
Department of Electrical and Electronic Engineering, University of Hong Kong Network Parameter Representation and Quality of Service Routing in the Internet
Sponsors: Kirk Lougheed, Fred Baker
Computer applications nowadays have very diverse network requirements, and the conventional approach of representing metrics as single numbers is not sufficient. For instance, we may want to describe survivability in terms of how likely a path is to survive in different situations; a single number may not be appropriate. Even when individual metrics can be represented by single values, a parameter consisting of more than one metric, such as delay and bandwidth, makes conventional representation problematic because comparisons are no longer trivial. To enhance the effectiveness of routing protocols and the reliability of networks, different kinds of network-parameter representations have to be introduced. In this project, we aim to investigate how to represent and manipulate network parameters to facilitate more effective and accurate distribution of route advertisements, so as to find more feasible paths for user services and failure recovery in the Internet.
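The comparison problem can be made concrete with a small sketch (made-up metric values, illustrating the general difficulty rather than the project's proposed representation): with a (delay, bandwidth) parameter, paths are only partially ordered, so a router cannot keep a single "best" value but must retain every non-dominated path.

```python
# Pareto comparison of multi-metric paths: (delay_ms, bandwidth_mbps).
# Lower delay is better; higher bandwidth is better.

def dominates(p, q):
    """p dominates q if p is at least as good in both metrics and
    strictly better in at least one."""
    (d1, b1), (d2, b2) = p, q
    return d1 <= d2 and b1 >= b2 and (d1 < d2 or b1 > b2)

def pareto_front(paths):
    return [p for p in paths
            if not any(dominates(q, p) for q in paths if q != p)]

paths = [(10, 100), (20, 500), (30, 400), (15, 80)]
# (30, 400) is dominated by (20, 500), and (15, 80) by (10, 100).
# The two survivors are incomparable, so both must be kept and advertised.
assert sorted(pareto_front(paths)) == [(10, 100), (20, 500)]
```

Collapsing the surviving pair into one number (say, by a weighted sum) would silently discard paths that some application legitimately prefers, which is the representational inadequacy the proposal sets out to address.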
Electrical Engineering and Computer Science, Stanford University Accurate Network Timing and Synchronization
Sponsor: Tom Edsall
The IEEE 1588 “Precision Time Protocol” (PTP) provides hardware support to synchronize multiple clocks in a Gigabit Ethernet network to within 50 ns. We are interested in two questions: (1) What are the theoretical and practical limits to clock synchronization in a network of switches and routers? (2) Given precise network-wide clock synchronization, what are the implications for network protocols and applications? We suspect it would lead to big simplifications in how networks operate, and could simplify important applications where the timing and ordering of events matter.
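The standard PTP delay request-response exchange makes the first question concrete (timestamps below are illustrative): from four hardware timestamps, the slave computes its clock offset and the one-way delay, but only under a path-symmetry assumption whose violation is one of the practical limits we want to quantify.

```python
# PTP offset/delay computation from the four exchange timestamps:
# master sends Sync at t1, slave receives it at t2; slave sends
# Delay_Req at t3, master receives it at t4. Times in seconds.

t1, t2 = 100.000, 100.015   # Sync: master send, slave receive
t3, t4 = 100.020, 100.025   # Delay_Req: slave send, master receive

offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
delay = ((t2 - t1) + (t4 - t3)) / 2    # assumed-symmetric one-way delay

assert abs(offset - 0.005) < 1e-9      # slave runs 5 ms ahead
assert abs(delay - 0.010) < 1e-9       # 10 ms one-way path delay
```

Any asymmetry between the two directions shows up directly as an undetectable offset error of half the asymmetry, which is why queueing inside switches, and hence on-path hardware timestamping, matters so much for the achievable bound.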
Z. Morley Mao
Department of Electrical Engineering and Computer Science, University of Michigan Automating Network Service Management through Holistic Dynamic Views
Sponsors: Ammar Rayes, David Jaffe
Network services are becoming increasingly difficult to manage due to immense complexity in software and dependencies across service components. Various tools and management services exist, but each is usually created with a specific network service in mind and lacks complete automated management for both common and newly developed network services. We tackle the problem through an integrated approach that makes use of holistic dynamic views, developing a bottom-up understanding of how services are actually used and configured during run-time operation. This understanding enables automated configuration management for diverse network services, detecting misconfigurations and suboptimal configurations. We propose the following techniques to manage network services, covering both offline analysis and online decision support.
Intelligent agents to perform offline bottleneck discovery: We develop intelligent agents to automatically test and analyze the usage scenarios of new applications by emulating user behavior through perturbation testing and protocol fuzzing. From this, we can develop metrics for desired user-perceived service behavior which can assist in troubleshooting deviations from expected behavior by making suggestions to configuration changes.
Data-mining historical service requests: We perform data mining on historical service requests to cluster related problem areas. Given a new service request, our tool can efficiently search for related problems and corresponding solutions. We apply semi-supervised machine learning to identify the most suitable solutions for a service request based on past history. We improve the indexing of resolved trouble tickets, their symptoms, and associated solutions; automatically capture the client application state; and combine it with server state to suggest solutions. We propose a tool which, given a set of symptoms related to a service request, makes intelligent recommendations based on history information and effectively visualizes the relevant historical service resolutions.
Client-side diagnosis system to perform local troubleshooting: We develop a client-side diagnosis system that passively captures client interaction with the network service, performs local diagnosis, and proactively reports possible problems to the server, integrating with the "Smart Call Home" system by correlating with problems experienced by local systems to detect potential misconfigurations on the client or server side.
Online service management: the service management component proactively predicts failures and configures services robustly to reduce the downtime and complexity of the network maintenance. Depending on the customer requirements, it automatically discovers the customer's running environment and configures the services at the server-side accordingly.
Jim Martin and James M. Westall
School of Computing, Clemson University DOCSIS 3.0 Channel Bonding Scheduling Algorithms and Issues
Sponsor: Randall Stewart
We propose a simulation-based study of scheduling algorithms for DOCSIS 3.0 systems. Building on our past work developing and validating a DOCSIS 1.1 model with the 'ns-2' open-source simulation package, we will add channel-bonding capabilities to the model. The objectives of the project are to: 1) develop and validate a simulation model of an HFC network that supports upstream and downstream channel bonding; 2) develop and validate a baseline scheduler for both upstream and downstream; and 3) develop a more advanced scheduler that incorporates the ideas underlying Cisco's Low Latency Queuing (LLQ) scheduling policy.
Nicholas F. Maxemchuk
Department of Electrical Engineering, Columbia University Multi-path Routing
Sponsor: Alvaro Retana
In this project we concentrate on multi-path routing in MANETs. Multi-path routing in MANETs is currently of interest in military networks. However, an increasing number of high-bandwidth applications are sharing the Internet with conventional users. Distributing the traffic of large-bandwidth users over multiple paths will reduce their impact on the rest of the network. We expect multi-path routing to become increasingly important throughout the Internet.
The objective of this project is to determine the most effective means of providing multi-path routing within the framework of the wireline routing protocols used in the current generation of routers. We will investigate multi-path schemes that can support proportional routing, redundant and non-redundant dispersity routing, and network coding.
The mechanisms that we will consider include multi-topology routing, multiple anchor point networks with standard OSPF routing, and multiple anchor point networks with geographic/robotic routing between the source, the anchor points, and the destination.
School of Engineering and Applied Sciences, Harvard University Hashing and Sampling Algorithms and Data Structures for Network Measurement, Monitoring, and Applications
Sponsor: Flavio Bonomi
Hashing-based and sampling-based algorithms and data structures are playing a growing role in networking hardware, enabling richer applications and natural methods for providing approximate measurement and monitoring primitives. In this proposal, we focus on a wide spectrum of questions relating to how to best take advantage of hashing and sampling within the network. At a high level, we consider possible designs for a near-ubiquitous, flexible hashing infrastructure that would allow approximation schemes for a variety of network measurement and monitoring tasks. This focus is motivated by the great value we see from hash-based structures, including their relative simplicity, flexibility, and cost-effectiveness. The goal of a general hashing infrastructure would be not only to handle issues that have already arisen in today's network, but also to provide a general framework for handling additional, currently unknown problems that may arise in the future. In particular, we plan to focus on examining how local hash-based structures can be combined to yield larger-scale synopses of network characteristics and performance. At a low level, we focus on the architectural design of hash-based and sample-based algorithms and data structures. Specifically, we consider how current ideas from theory can best be implemented and utilized in actual network hardware, emphasizing the analysis of the actual performance and costs of a variety of alternative approaches.
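As a concrete (and purely illustrative) instance of the kind of hash-based measurement primitive envisioned here, a count-min sketch maintains approximate per-flow counters in small, fixed memory. The sketch below is a minimal software model under our own assumptions, not a proposed hardware design:

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow counters using `depth` hash rows of width `width`.

    Estimates never undercount; overcounts from collisions are bounded
    with high probability as width grows.
    """
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _buckets(self, key):
        # Derive `depth` bucket indices from one digest (stand-in for
        # the independent hardware hash functions a real design would use).
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.depth):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
            yield chunk % self.width

    def update(self, flow_id, count=1):
        for i, b in enumerate(self._buckets(flow_id)):
            self.rows[i][b] += count

    def estimate(self, flow_id):
        # Taking the minimum over rows discards the worst collision noise.
        return min(self.rows[i][b] for i, b in enumerate(self._buckets(flow_id)))
```

Because every row only ever over-counts, the minimum across rows is the tightest available bound, which is what makes such structures attractive as simple, composable monitoring primitives.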
Department for Electrical and Communications Engineering, Helsinki University of Technology
Colin Perkins
Department of Computing Science, University of Glasgow Adaptive Error Measurement, Concealment, and Repair for IP Streaming Video
In addition to error concealment/repair mechanisms, IPTV systems will include management functions to automatically diagnose infrastructure problems. We propose to enhance such tools by cross-stream correlation and aggregation of reception quality reports to more effectively diagnose network problems.
We assume IPTV content is distributed by a two-level network. Distribution is via SSM from a (small) number of sources, via intermediate nodes acting as repair agents, to receivers. Receivers send RTCP reports [RFC 3550/3611] and rapid feedback [RFC4585,CCM]. Repair agents aggregate feedback, and report to the source.
We investigate feedback for three reasons:
Per-stream media repair: Intermediate nodes perform local repair, either reactive (e.g. retransmission or reactive FEC) or proactive (e.g. network coding). We will not develop new repair mechanisms, but consider pushing repair to homes with multiple receivers (e.g. by link local multicast between receivers) rather than performing all repair in-network. This will speed repair if heterogeneous loss occurs between receivers in a home (e.g. on home Wi-Fi).
Per- and cross-stream diagnosis: Depending on topology, single media feedback may not suffice to locate problems. If streams are offered from multiple sources via partly non-intersecting paths and repair agents, cross-correlation will allow more precise problem location. This network tomography is possible if channels are sourced from different locations (e.g. local advert insertion, multiple providers, P2P, in-home repair).
Quality metric aggregation: Feedback is desired to monitor overall reception quality. Many metrics are conceivable, depending on codec. We do not analyse/suggest specific metrics, but rather consider aggregation of typical metrics to enable constant feedback. Results will be qualified per diagnosis functions addressed in (2) to separate in-network and in-home errors.
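The cross-stream diagnosis in (2) can be sketched as a boolean-tomography computation: links shared by every lossy receiver's path, but by no clean receiver's path, are the prime suspects. The topology, link names, and receiver names below are hypothetical:

```python
def localize(paths, lossy):
    """Boolean tomography sketch: intersect the paths of lossy receivers,
    then remove any link also used by a clean receiver.

    paths: receiver -> list of link names on its path from the source.
    lossy: set of receivers reporting poor reception quality.
    """
    suspects = None
    for rcv, links in paths.items():
        if rcv in lossy:
            suspects = set(links) if suspects is None else suspects & set(links)
    for rcv, links in paths.items():
        if rcv not in lossy and suspects:
            suspects -= set(links)  # clean receivers exonerate shared links
    return suspects or set()
```

With two receivers behind a common repair agent, loss reported only by r1 implicates the m-r1 link, while loss reported by both implicates the shared s-m link, which is exactly the extra precision cross-correlation buys over single-stream feedback.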
The project will use simulation paired with real video traces.
Harry G. Perros
Department of Computer Science, NC State University Multi-Domain and Single Domain Route Selection under QoS constraints
Sponsor: Tsegereda Beyene
We believe that now is an appropriate time to work towards a wider deployment of QoS. Cisco, as one of the stakeholders, has an interest in seeing broader adoption of QoS in multi-provider networks, both public and private. The stakeholders include network service providers, network equipment vendors, large enterprise users of networking services, application designers and other service innovators. We believe that a multi-vendor, multi-provider effort is critical to success, as open standards will ultimately be needed to foster widespread deployment of QoS that reaches beyond the boundaries of a single provider's network.
The proposed research aims at resolving some of the issues associated with the wider deployment of QoS. It was discussed with Ms. Tsegereda Beyene (email@example.com) and it builds on a proposal by her and co-authors for optimal route calculation across several domains using the concept of the path computation element (PCE). Ms. Beyene will advise us during this project.
Electrical Engineering and Computer Science, Stanford University Congestion management schemes: Their impact on switch and router buffer sizes, and a new framework for modeling their performance
Sponsors: Flavio Bonomi, Tom Edsall
The impact of congestion management schemes on the size of buffers required at routers and switches is considered. Specifically, we consider non-TCP sources, non-RED queue management schemes and a small number of sources. A particular scheme that we are developing along with Cisco researchers for the IEEE 802.1 Data Center Ethernet standardization process is our main motivation.
A second major theme of the proposal is a new method for analyzing the performance of congestion management schemes, based on the Cavity Method of Statistical Physics, that is more comprehensive and more widely applicable.
Electrical and Computer Engineering, University of Arizona Channel Access and Connection Establishment in Multi-Channel Wireless Networks
Sponsor: Russ White
Wireless networks are employed as backbone networks in metro areas and older residential neighborhoods due to lower installation costs compared to wired networks. In such scenarios, employing multiple (orthogonal physical-layer) channels and directional antennas at nodes can significantly increase the achievable throughput. The goal of this research is to evaluate the alternatives for operating a multi-channel wireless network employed as a backbone network. In this research, we develop and evaluate: (1) channel access protocols for multi-channel wireless networks with omni-directional and directional antennas; and (2) routing and channel assignment under the channel discontinuity constraint in dynamic traffic scenarios. We will evaluate the above-mentioned characteristics through analytical models and simulations.
Department of Electrical and Computer Engineering, University of Arizona Sustainable Multipath Routing in Packet-Switched Networks With Minimum Overhead
Sponsor: Russ White
Colored trees are an effective mechanism for achieving disjoint multipath routing with no packet overhead. The colored-tree approach constructs two trees, namely red and blue, rooted at a given destination such that the paths from any node to the destination on the two trees are (node- or link-) disjoint.
In this research, we propose to study (1) the maintenance of colored trees under node additions and deletions; (2) the alternatives for all-to-all multipath routing for packet-switched networks; (3) the effectiveness of the colored tree construction/maintenance algorithms in guaranteeing performance during reconfiguration; (4) the ability of TCP connections to exploit the increased bandwidth available from multiple paths. We will evaluate the performance both theoretically and through simulation experiments.
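To illustrate the forwarding behavior that colored trees enable, the toy sketch below stores a red and a blue next-hop table toward a destination D and falls back to the blue tree when a red link fails. The four-node topology and names are hypothetical:

```python
# Two precomputed link-disjoint trees toward destination D, stored as
# next-hop tables (the state each node would keep in a colored-tree scheme).
RED  = {"A": "B", "B": "D", "C": "D"}   # next hop toward D on the red tree
BLUE = {"A": "C", "C": "D", "B": "D"}   # next hop toward D on the blue tree

def route(src, dst, failed_links):
    """Forward along the red tree; switch to blue if a red link has failed."""
    for tree in (RED, BLUE):
        path, node, ok = [src], src, True
        while node != dst:
            nxt = tree[node]
            if (node, nxt) in failed_links or (nxt, node) in failed_links:
                ok = False   # this tree is broken; try the other color
                break
            path.append(nxt)
            node = nxt
        if ok:
            return path
    return None  # both trees broken: no surviving disjoint path
```

With link A-B down, traffic from A still reaches D over the blue tree, with no per-packet overhead and no on-the-fly path computation, which is the property that makes tree maintenance under topology changes the interesting problem.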
Sanjay G. Rao and Charlie Hu
School of Electrical and Computer Engineering, Purdue University Classification of Distributed Hash Table methods, and their suitability to various application domains
We seek to conduct a comparative study of DHT designs evaluating their suitability for communication applications like audio/video conferencing (e.g. voice over IP) and broadcasting. In particular, we focus on using DHTs to optimize both the control path, e.g. to locate a conference/broadcast, given the contact information of the remote participant or conference, and the data path, e.g., for selecting an intermediate peer in the overlay network for relaying data packets.
Our proposed research seeks to (i) identify and distill fundamental design parameters in DHT designs that impact their effectiveness; (ii) classify existing DHTs such as Chord and Pastry based on choices made with respect to these
design parameters; and (iii) study the impact of these design parameters on performance, reliability, overheads, security and ability to handle Network Address Translators (NATs). Our approach is experimental, involving real implementations and workloads, rather than theoretical worst-case bounds. The expected outcome is a set of recommendations on the range of acceptable choices for each design parameter, as a function of the setting.
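One of the most basic such design parameters is the geometry of the identifier space. It can be illustrated with a minimal Chord-style successor computation; node IDs below are hypothetical, and real Chord layers O(log n) finger tables on top of this:

```python
from bisect import bisect_left

class ChordRing:
    """Minimal Chord-style ring: a key maps to the first node clockwise
    of the key in a 2**bits identifier space.

    This is the successor relation that finger tables merely accelerate;
    other DHT geometries (tree, hypercube, butterfly) change this mapping.
    """
    def __init__(self, node_ids, bits=8):
        self.space = 2 ** bits
        self.nodes = sorted(n % self.space for n in node_ids)

    def successor(self, key):
        key %= self.space
        i = bisect_left(self.nodes, key)
        return self.nodes[i % len(self.nodes)]  # wrap around the ring
```

Keys past the largest node ID wrap around to the smallest, so responsibility stays evenly spread as nodes join and leave, one of the properties any classification of DHT geometries has to account for.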
School of Electrical and Computer Engineering, Purdue University Monitoring Peer-to-Peer Networks for Anomalous Traffic
Sponsor: Navindra Yadav
We seek to design algorithms to monitor traffic of peer-to-peer systems, and detect deviant traffic behavior. The motivation is two fold:
(i) Normal behavior in peer-to-peer applications exhibits characteristics similar to an Internet worm due to the many-to-many download profile; this is a common cause of false positives. (ii) Software bugs, design limitations, and DDoS reflector attacks exploiting peer-to-peer systems may lead to undesirable traffic patterns.
To address these issues, we will: (i) characterize the traffic of peer-to-peer systems with respect to metrics used in anomaly detection, based on data collected from operational peer-to-peer systems and analysis of system behavior under scale and churn; (ii) study the implications of these traffic characteristics for detection algorithms; and (iii) study the interplay between system design and the ease of monitoring the system.
Department of Computer Science, NC State University Stability of Congestion Control: Metrics and Protocols
Sponsor: Larry Dunn
Can we design a congestion control protocol that is stable independent of (or less dependent on) packet buffers smaller than the full bandwidth-delay product? This proposed study takes two approaches to answering the question. We first define a set of carefully reviewed stability metrics and study their implications for the general well-being of the Internet, and then experimentally validate those metrics and apply the lessons learned to the design and implementation of new congestion control protocols.
Department of Computer Science and Engineering, University of California, San Diego Data-based security policies and miscreant infrastructure analysis
Sponsors: Doug Comer, Flavio Bonomi, Patrick Peterson
In this proposal we focus on two problems arising in this environment: providing precise controls on data use inside the enterprise, and better understanding the infrastructure miscreants use to send unwanted data (spam, phishing, and malware) from outside the enterprise.
Dept. of Computer Science; Dept. of Electrical Engineering, Columbia University Reputation Services for Spam Classification
In this proposal, we exploit reputation services, social networks, and local information to create white lists for spam detection. Only messages from strangers, i.e., those that have not previously communicated with the receiver via IM or email, need to be checked against the social network. Stranger email that is not found in the recipient's social network undergoes normal, but more intrusive, spam filtering, based on content, graylisting, payment-at-risk, Turing tests or other mechanisms. This approach greatly reduces the chance of false positives.
A reputation service assigns a trust score to an identity, which may be a single user, an organization or a domain. Each person is represented by a node and has a link to all his immediate acquaintances. Numerous social networking sites exist on the Internet today, as instantiations of different social networks (LinkedIn, Facebook, Orkut, ...).
We want to avoid exposing complete social networks to strangers, to protect user privacy and to encourage participation. Rather, we envision that such sites have APIs that allow spam filters to query whether and how strongly user A is linked to user B.
While there may be no direct links between 'strangers', domain links are likely to exist. (This is clearly only appropriate for non-webmail domains, such as cisco.com or columbia.edu.)
We consider distributed computation of trust scores using PageRank-like algorithms to be prone to collusion and easily subverted by malicious users. Instead, we advocate delegating the computation of reputation to multiple specific organizations known as Reputation Authorities (RAs).
We plan to evaluate these algorithms and implement a prototype.
Alex C. Snoeren
Department of Computer Science and Engineering, University of California, San Diego Designing Router Primitives to Monitor Network Health
Sponsor: Doug Comer
We propose designing new router primitives that can accurately and efficiently monitor the health of a network in a scalable fashion. If a problem is detected (whether a connectivity problem or a performance problem such as high rates of loss or jitter), the primitives can help localize the problem to facilitate rapid repair. Current ad hoc monitoring techniques such as end-to-end network probes do not scale to timely full, all-pairs coverage, and leveraging them for localization requires indirect inference using heuristics or effort by expert human operators. In contrast, our router-based primitives lend themselves to automated diagnosis and localization of enterprise networks and ISPs, which can considerably reduce operational expenditures and improve customer satisfaction.
Department of Electrical and Electronic Engineering, University of Melbourne A Green Internet
Sponsors: Jeff Allison, Garry Epps
The aim of this project is to develop a model of energy consumption in the Internet and to use this model to analyze the growth of energy consumption as the size and capacity of the network increases. We will focus on developing an understanding of technological barriers to growth of the network. The analysis will use models of the scaling properties of IP networks, based on fundamental considerations of network capacity, physical limitations of key technologies and an analysis of the inter-relationship between energy consumption and information flow.
Department of Computer Science and Engineering, The Pennsylvania State University Resource Management in Virtualization-Based Consolidated Hosting Platforms
Sponsor: Vithal Shirodkar
This project will develop a resource management infrastructure called River for emerging data centers employing server virtualization for consolidating heterogeneous OS/applications. Such consolidation is desirable due to the associated cost reductions. Virtualization introduces several new features - overheads of virtualization, new stability and optimality concerns raised by fast migration capabilities, new resource usage monitoring and accounting issues, the emergence of a hierarchical scheduling structure - that necessitate a fresh look at resource management. Our solution will span multiple spatial and temporal granularities, from fine time-scale scheduling at the VMM level to coarser time-scale provisioning at the data center level. An enhanced VMM kernel, called eVMM, will implement a variety of improved resource management algorithms to enable robust performance under high consolidation: (i) an I/O-aware CPU scheduling algorithm that would help reduce the performance degradation of I/O-intensive applications, (ii) a scheduler-aware memory manager that would assist the CPU scheduler in continuing to provide fair CPU allocations even under high memory pressure, and (iii) mechanisms to allow VMM schedulers to dynamically tune their operation to the overlying operating systems' schedulers. Our dynamic resource provisioning mechanisms will be implemented within a system-wide resource manager called the Control Plane. First, we will devise mechanisms to refine the predicted resource needs of applications using simple statistical techniques such as linear regression. Second, we will develop a scalable optimization framework in the form of an Allocator to dynamically re-provision resources to hosted applications. Finally, we will employ feedback control-based approaches to: (i) ensure the stability of provisioning decisions and (ii) bound deviations from optimal operating regimes due to modeling/prediction errors and workload fluctuations.
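As an illustration of the first Control Plane mechanism, a least-squares fit over a window of observed demand can refine the forecast for the next provisioning interval. The sketch below is a generic stand-in with hypothetical numbers, not the proposed Allocator:

```python
def linreg_forecast(samples):
    """Fit y = a*t + b to a window of demand samples (one per interval)
    and extrapolate one interval ahead.

    A minimal example of refining predicted resource needs by linear
    regression; a real provisioner would add confidence bounds and headroom.
    """
    n = len(samples)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, samples))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var if var else 0.0    # slope: demand growth per interval
    b = mean_y - a * mean_t          # intercept
    return a * n + b                 # forecast for the next interval
```

For a steadily growing load such as [10, 20, 30, 40], the fit extrapolates a demand of 50 for the next interval; the feedback-control layer described above would then damp over-reactions to such forecasts.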
Department of Computer Science and Engineering, University of California, San Diego Algorithms and Infrastructure for Shared Mesh-based Video Distribution
Sponsor: Doug Comer
The goal of this proposal is to investigate algorithms and architectures to enable scalable, high-performance video distribution to large numbers of nodes distributed across the Internet.
The idea is to develop a set of technologies that can be combined to create federated video distribution utilities that support the simultaneous delivery of a wide variety of content to overlapping sets of clients with statistical quality of service assurances.
By enabling a single service to dynamically reconfigure itself based upon the content it is serving at any particular moment, this work will allow autonomous service providers to pool their resources to construct distribution networks of scales never before thought possible.
Department of Computer Science, University of California, San Diego Flexible High-Speed Parsing for Network Devices Architecture
Sponsor: Flavio Bonomi
As speeds increase, the complexity of parsing for protocols can become a bottleneck. At the same time, as routers integrate more services, it becomes important (at least in the enterprise space) for routers to parse complex new application protocols, some of which hide behind standard TCP ports. Besides speed, flexibility is also an important goal: it is important to design a flexible parsing block to which the specifications of a new (say P2P) protocol can be added after the device is operational.
Standard techniques for flexible parsing (e.g., Pathfinder in software, Cisco FlexParser in hardware) represent the parsing process as a tree-shaped state machine where parsing a field corresponds to a node in the tree, which can then lead to a number of child nodes where further fields are examined. Unfortunately, each level in the tree adds an interpretation cost, and so the cost of flexible parsing in this way is high. On the other hand, in some situations it is possible to extract all the relevant fields and directly jump to the leaves of the parse tree using a large CAM, which is expensive in storage. We propose research into a new style of parsing that takes an intermediate approach, where a limited amount of parallelism at each tree node can be used to generate parse trees that are both fast and memory-efficient.
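The tree-shaped state machine being contrasted here can be modeled in a few lines: each internal node names a field to inspect and branches on its value, and each per-level lookup models exactly the interpretation cost the proposal aims to reduce. The two-level protocol tree below is illustrative only:

```python
# Illustrative tree-shaped parser. An internal node is (field_name, branches),
# where branches maps a field value to a child node; a leaf is (label, None).
TREE = ("ethertype", {
    0x0800: ("ip_proto", {        # IPv4: examine the next field
        6:  ("tcp", None),        # leaves name the classified protocol
        17: ("udp", None),
    }),
})

def classify(fields, node=TREE):
    """Walk the parse tree, one field lookup per level."""
    name, children = node
    if children is None:              # leaf: parsing finished
        return name
    child = children.get(fields.get(name))
    return classify(fields, child) if child else "unknown"
```

Each recursive step is one tree level and one lookup; the CAM alternative mentioned above collapses all levels into a single wide match, and the proposed intermediate approach would examine several fields per node to shorten this walk.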
Alan E. Willner
Communication Science Institute, Department of Electrical Engineering, University of Southern California Techniques for Enhancing WDM Transmission Performance when using Advanced Modulation Formats at 40 Gb/s and Beyond
Sponsor: Loukas Paraschis
An explosion of excitement has erupted in the optical communications community at the prospect of using data modulation formats that are more advanced than simple ON/OFF keying. Specifically, the simple and higher-order formats related to differential-phase-shift-keying (DPSK) hold the promise of increased tolerance to chromatic dispersion and nonlinearity, higher receiver sensitivity and spectral efficiency, and reduced electronic speed requirements for the same data rate. Unfortunately, advanced modulation formats tend to require more complex receivers, such that DPSK demodulation typically requires phase-sensitive delay-line interferometers. Moreover, future networks might require receivers to recover different bit rates and modulation formats in order to accommodate heterogeneous traffic, yet advanced receivers tend to be fixed in bit rate and format.
At data rates ranging from SONET-relevant 40 Gb/s to Ethernet-relevant 100 Gb/s, we will pursue novel DQPSK receiver designs that enable more stable, reconfigurable, and cost-effective operation for future high-performance optical networks. We will build on past successful collaborations with Dr. Loukas Paraschis, our Cisco Champion, to explore systems limitations and potential applications of our stable and reconfigurable receiver techniques. Specific projects will include: (a) demonstrating a reconfigurable receiver that can be readily tuned to recover different bit rates and modulation formats, (b) demonstrating a non-coherent receiver that reduces by half the number of required interferometers, (c) demonstrating a stable and low-cost fiber-Bragg-grating (FBG) that replaces the interferometer, and (d) determining the network limitations placed on polarization-multiplexed DQPSK 100 Gb/s systems by fiber-based chromatic, polarization, and nonlinear effects. If successful, our research will enable increased flexibility and simplicity in next-generation high-performance optical systems, and at lower cost.
Department of Electrical and Electronic Engineering, The University of Hong Kong Load-balanced Switch Architecture for High Speed Routers
Sponsor: Flavio Bonomi
A major bottleneck of high-speed router design is its switch architecture, which concerns how packets are moved from one linecard to another. A load-balanced switch architecture is configured according to a pre-determined and periodic sequence of switch configurations. It is attractive because no central scheduler is required and close to 100% throughput can be obtained. But it also faces two major challenges: packet mis-sequencing and poor delay performance. We have designed a feedback-based switch architecture to simultaneously address these two challenges. In this project, we aim at exploring techniques to further enhance the performance of our feedback-based switch architecture under non-uniform and multicast traffic conditions.
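The pre-determined, periodic configuration sequence at the heart of a load-balanced switch can be written down directly. In the canonical form below (a textbook sequence, not our feedback-based design), input i connects to output (i + t) mod N at slot t, so every input-output pair is connected exactly once per N slots:

```python
def configuration(t, n):
    """Switch configuration at time slot t for an n-port load-balanced switch.

    Returns the input -> output mapping. Because the sequence is fixed and
    periodic, no central scheduler is needed; over any n consecutive slots,
    each input is connected to each output exactly once.
    """
    return {i: (i + t) % n for i in range(n)}
```

The price of this scheduler-free operation is that cells of one flow traverse different intermediate ports, which is the source of the mis-sequencing and delay problems the feedback-based architecture addresses.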
Ben Y. Zhao
Department of Computer Science, UC Santa Barbara Classification of Distributed Hash Table methods, and their suitability to various application domains
Structured overlays, also known as DHTs, provide infrastructures for Internet-scale applications. Many protocols (>20) have been introduced since 2001, each varying in how they perform routing and state management. Given the maturation of these protocols and their integration into products (VoIP, Windows networking), it is more important than ever to understand the fundamental issues in choosing the right protocol and parameters for each application.
We (PI Ben Zhao and students) propose to undertake a detailed examination of the state of the art in analysis of current structured p2p protocols. We will take a systematic look at classifying protocols based on topology (e.g. Torus, Hypercube, DeBrujin graph) and existing applications based on how they leverage the infrastructure interfaces (e.g. rendezvous in Scribe, group formation and maintenance in Cashmere, resilient routing, and replicated storage in CFS). We will group similar protocols, and perform detailed tests of representative protocols on shared simulation/emulation testbeds. We will use measurement traces to drive environmental conditions such as network churn, network load and failures, and availability of memory and CPU resources. With comprehensive tests, we seek a thorough understanding of the impact of parameter choices as well as fundamental differences between protocols.
The PI is uniquely qualified to undertake this research, since he is the architect of one of the original 4 DHT proposals (Tapestry), and has taken key roles in standardizing the common interface for these protocols (KBR) as well as leading the analysis into their performance, attack resilience and security properties.
Department of Computer Science, University of California, Santa Barbara The Last Mile: Building the Final Piece in One-to-Many Content Distribution
There is a need to support multicast capability as the Internet expands at its edges (such as in consumer broadband networks). One particularly promising technology being standardized by the IETF is Automatic IP Multicast Without Explicit Tunnels, commonly known as AMT. The development of AMT has reached a point where relay and gateway functions need to be implemented and deployed. This proposal describes our planned efforts to develop software that offers the final piece of the multicast puzzle.
Computer Science and Engineering Department, University of Washington
A new class of general-purpose data management systems, called stream processing engines (SPEs), supports the needs of monitoring applications. The goal of existing SPEs is to provide low-latency processing of data that streams in from geographically distributed sources. Although monitoring applications focus on the current state of the system, when events of interest occur, we posit that historical information is necessary to explain these events and determine appropriate responses. The goal of this project is to explore techniques for enhancing the near-real-time information produced by an SPE with relevant historical data.
Olivier Bonaventure, Pierre Francois
Computer Science and Engineering Department, Université Catholique de Louvain ICIM: Improving the Convergence of IP Multicast Routing Protocols
This project addresses the problem of fast recovery for multicast traffic after network topology changes. Two types of topology changes exist: urgent changes (such as sudden link failures) and non-urgent changes (such as changes in IGP metrics, manual link shutdowns, or link-up events). First, we will rely on traces to characterize these events and evaluate their impact on multicast trees. Second, we will use simulations to evaluate the convergence time of current PIM-SSM implementations in ISP networks. Third, we will build on the ordered FIB update extensions proposed for IS-IS to develop extensions that allow PIM-SSM routers to converge without packet loss after a non-urgent topology change. Finally, we will propose fast-reroute techniques able to protect IP multicast traffic.
Computer Science Division, University of California, Berkeley MultiView Videoconferencing
Preservation of gaze (eye contact or gaze perception) is important for trust-building and persuasion among videoconference participants. The only way to preserve gaze in a group conference is to use a MultiView display, which allows each participant to see a distinct, personal view of the other side. For this project we are developing new MultiView systems and evaluating how they impact group-to-group communication. The goal is to build systems that, for the first time, reproduce all the advantages of a face-to-face meeting.
Nelson da Fonseca
Computer Engineering Department, State University of Campinas
Dynamic Traffic Grooming with Support to QoS in IP over WDM Networks
Wavelength-division multiplexing (WDM) is the switching technology for the new generation of the optical Internet, as it provides bandwidth at very large scale. The disparity between the bandwidth available in a lightpath and the demands of label switched paths (LSPs) has motivated multiplexing several LSPs onto a single lightpath. Our aim is to develop effective and efficient techniques for dynamic grooming of subwavelength LSPs onto a lightpath without ignoring QoS requirements.
Computer Science and Engineering Department, University of Wisconsin-Madison High Performance Intrusion Prevention in Software
This proposal focuses on finding algorithmic solutions that improve performance measures of software-based intrusion prevention systems. We wish first to improve throughput by extending the operation of traditional deterministic finite automata with counters in a way that reduces the number of states required for an automaton to match a set of signatures. This reduces the number of automata required to implement the full set of signatures. Second, we wish to improve performance by tuning algorithms and data structures to multicore processors, exploiting the parallelism of multiple threads without introducing extensive locking and synchronization overhead, and improving cache locality by careful scheduling.
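To illustrate the counter-extension idea on a single signature, consider matching a pattern like /a.{3}b/: a plain DFA needs a distinct state for each counted position, while a single counter collapses that chain. The sketch below is our own toy illustration, not the proposers' construction, and for clarity it does not track overlapping candidate matches:

```python
def match_a_gap_b(data, gap=3):
    """Match 'a', then exactly `gap` arbitrary characters, then 'b'.

    One counter stands in for the chain of `gap` DFA states that a
    counter-free automaton would need, shrinking the state machine.
    """
    # states: 0 = looking for 'a', 1 = counting the gap, 2 = expecting 'b'
    state, count = 0, 0
    for ch in data:
        if state == 0:
            if ch == "a":
                state, count = 1, 0
        elif state == 1:
            count += 1
            if count == gap:
                state = 2
        else:  # state == 2
            if ch == "b":
                return True
            # restart: this simplified sketch ignores overlapping candidates
            state, count = (1, 0) if ch == "a" else (0, 0)
    return False
```

Where a signature set contains many such counted gaps, replacing state chains with counters reduces the number of states per automaton, and hence the number of automata needed to cover the full signature set.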
Computer Science Department, University of California, Riverside Automated Traffic Classification: Benchmarks and Novel Tools
In the area of traffic classification and (its special case of) intrusion detection, there are two main challenges. First, there is no benchmark to assess and compare different approaches. Second, most approaches are either nonintuitive, based on opaque machine learning and statistical techniques, or intuitive but reliant on human intervention or heuristics. Our first goal is to develop the first publicly available benchmark, based on carefully anonymized data collected from our campus, with many traces at multiple levels (campus, department, lab). Second, we shall introduce statistical techniques into BLINC, a traffic classification technique we have developed, in a way that does not take away from its highly intuitive operation.
Computer Networking Department, Cornell University
End-Middle-End Internet Connection Establishment
This proposal is an extension of a previous award, “Next Generation NAT and Firewall Traversal,” in which we focused on the problem of establishing data connections between hosts behind NATs and firewalls in a way that allows all stakeholders in the connection (endhosts, enterprises, ISPs) to invoke their security policies. The technology, called NUTSS, is a name-based signaling architecture that couples off-path and optional on-path signaling components. In the next year we shall work on the goals of the EMERG (End-Middle-End Research Group) and in addition produce a publicly releasable NUTSS library, which we shall evolve to reflect EMERG output.
Department of Computer Science, University of Houston
Optimizing Collective File Operations over InfiniBand, Gigabit Ethernet and Mixed Network Interconnects
For many applications in industry as well as in academic environments, a key challenge in handling large data sets is accessing large files simultaneously from multiple processors and applications in the most efficient manner, without generating inconsistencies in the file or the file system. In this project we will develop optimizations for collective I/O operations over InfiniBand, Gigabit Ethernet, and mixed network interconnects by analyzing the occurring communication patterns, then optimizing the resulting group communication using message segmentation, explicit handling of communication hierarchies, and hardware support such as multicast if applicable.
Department of Math and Computer Science, Lehman College
Address Assignment in Traditional and Ad Hoc Networks
We shall carry out a theoretical and experimental study of algorithms for reliable assignment of IP addresses in spite of server failures, network failures, and node mobility. For traditional IP networks, both wired and wireless, we examine the proposed failover protocol for DHCP. For MANETs, we will examine and propose new algorithms for assigning IP-compatible addresses reliably and correctly. The central goals are to develop new insights into the best design of address-assignment mechanisms for traditional networks and MANETs, to develop innovative algorithms for address assignment, and to explore improved methods for expressing requirements for Internet protocol standards.
ECE and CS Departments, Rice University
Achieving High Performance and Fairness in Multihop Wireless Access Networks
The first year of this project yielded four key outcomes: (1) expanded deployment of the TFA-Rice Wireless Mesh to nearly 1000 users, (2) completion of a measurement-driven study of mesh network deployment factors, (3) development of an analytical model to predict each flow's throughput in an arbitrary multihop topology of 802.11 nodes, and (4) development and study of new protocols to counter severe unfairness and starvation. In the next phase of this project we shall design, implement, and deploy a zero-overhead congestion control algorithm that operates purely at the wireline gateway; we shall also pursue measurement-driven protocol design, in which our discoveries from field measurements drive the design of optimized protocols.
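A gateway-side controller has to decide how to divide its wireline capacity among mesh flows using only information it can observe locally. As one hedged illustration of the kind of allocation such a controller might compute, the sketch below implements classic max-min fair sharing (this is a textbook policy chosen for illustration, not the proposed zero-overhead algorithm itself):

```python
# Max-min fair allocation of gateway capacity among flows (Mbps).
def max_min_share(capacity, demands):
    """No flow gets more than it asks for; leftover capacity is
    split evenly among the still-unsatisfied flows."""
    rates = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = float(capacity)
    while active:
        share = remaining / len(active)
        # Flows demanding less than the equal share are satisfied.
        done = {f for f in active if demands[f] <= share}
        if not done:
            for f in active:
                rates[f] = share
            break
        for f in done:
            rates[f] = demands[f]
            remaining -= demands[f]
        active -= done
    return rates

rates = max_min_share(10.0, {"a": 2.0, "b": 9.0, "c": 9.0})
```

Here flow "a" is fully satisfied and the two greedy flows split the remainder evenly; preventing the starvation of flows like "a" is precisely the unfairness problem the mesh protocols target.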
Computer Science Department, Indiana University
Exploiting Multi-Path Routing for Collective Communication in MPI
Although Message Passing Interface (MPI) implementations have made considerable efforts in tuning the local communication stack to optimize point-to-point communication operations, optimization of collective routines in MPI requires information about the global network and is less well-studied. Accordingly, we shall develop collective routines for MPI that can fully exploit large-scale high-performance InfiniBand-based networks. In particular, we will study the use of the multiple equivalent paths between leaf and core switches in large-scale InfiniBand clusters to reduce switch-to-switch congestion during collective operations and therefore increase collective performance at scale.
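In a two-level InfiniBand fat tree, any pair of leaf switches can communicate through any core switch, so a collective can spread its traffic across cores instead of funneling it through one inter-switch link. The sketch below shows one deterministic spreading rule (the rule itself is an illustrative assumption; both endpoints just need to agree on it):

```python
# Spread (source leaf, destination leaf) pairs across equivalent
# core switches so no single switch-to-switch link carries them all.
def core_for(src_leaf, dst_leaf, num_cores):
    # Deterministic, so sender and receiver pick the same path.
    return (src_leaf + dst_leaf) % num_cores

def spread(pairs, num_cores):
    """Count how many pairs each core switch carries."""
    load = [0] * num_cores
    for s, d in pairs:
        load[core_for(s, d, num_cores)] += 1
    return load

# An all-to-all step among 4 leaves: 12 pairs spread over 4 cores,
# rather than all 12 congesting a single core's links.
pairs = [(s, d) for s in range(4) for d in range(4) if s != d]
load = spread(pairs, 4)
```

A production collective would refine the rule (and schedule communication rounds) to equalize load further, but even this simple hash caps the worst-case link sharing well below the single-path case.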
Electrical Engineering and Computer Science, Stanford University
NetFPGA: An Open-source Teaching and Research Tool for Programmable Network Hardware
NetFPGA is a low-cost prototyping system that lets teachers and researchers design, implement, and deploy real networking hardware. The platform is a PCI card with four Gigabit Ethernet ports, a large FPGA, and some buffer memory. Users implement designs using an industry-standard design flow (Verilog). NetFPGA will be made available at cost to educational institutions and nonprofits; Xilinx will donate the FPGA components, IP, and tools. In the proposed work, we will develop a self-supporting user community so that NetFPGA can scale for networking hardware developers, much as the open-source community has done for software.
Computer Science Department, University of Texas at Dallas
The Last Mile: Building the Final Piece in One-to-Many Content Distribution
There is a need to support multicast capability as the Internet expands at its edges (e.g., consumer broadband networks such as DSL and cable). One particularly promising technology being standardized by the IETF is Automatic IP Multicast Without Explicit Tunnels, commonly known as AMT. The development of AMT has reached a point where relay and gateway functionality needs to be implemented and deployed. In this proposal, we describe our planned efforts to develop that software and supply the final piece of the multicast puzzle.
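The core of AMT's relay/gateway split is that a relay on the multicast-capable core re-sends each multicast packet as unicast to every gateway that has joined the group, so hosts on unicast-only access networks can still receive it. The sketch below is a conceptual toy, simplified well beyond the IETF specification (class names and addresses are illustrative):

```python
# Conceptual toy of an AMT-style relay: track gateway membership per
# multicast group and fan each packet out as unicast copies.
class Relay:
    def __init__(self):
        self.members = {}  # group address -> set of gateway addresses

    def join(self, group, gateway):
        self.members.setdefault(group, set()).add(gateway)

    def forward(self, group, payload):
        """Return the unicast copies the relay would emit."""
        return [(gw, payload)
                for gw in sorted(self.members.get(group, ()))]

relay = Relay()
relay.join("232.1.1.1", "198.51.100.7")
relay.join("232.1.1.1", "203.0.113.9")
copies = relay.forward("232.1.1.1", b"frame")
```

The real protocol adds the encapsulation format, relay discovery, and IGMP/MLD message tunneling; those are the relay and gateway pieces the proposal sets out to implement.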
Mathematics and Computer Science Department, Massachusetts Institute of Technology
Prediction Intelligence in the Network
The objective of this project is to address a set of questions about the utility, design, and placement of intelligence in the network. The work proposed here will concentrate on learning and prediction to improve routing. Because routing decisions are made inside the network, this implies placing some learning and reasoning inside the network, although the balance and tradeoffs with respect to communication costs, memory usage, complexity, and deployability remain key factors in our research. We also intend to extend our work on routing to a DTN-like environment. This work comprises the core of the doctoral dissertation of Rob Beverly, under the supervision of Dr. Karen Sollins.
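As a toy illustration of prediction placed inside the network, a router might keep an exponentially weighted moving average of each next hop's observed delay and forward along the hop it predicts to be fastest. The predictor and its parameters below are our illustrative choice, not the dissertation's model:

```python
# EWMA delay predictor for next-hop selection (illustrative only).
class LatencyPredictor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = {}  # next hop -> predicted delay (ms)

    def observe(self, hop, delay):
        old = self.estimate.get(hop, delay)
        self.estimate[hop] = (1 - self.alpha) * old + self.alpha * delay

    def best_hop(self):
        return min(self.estimate, key=self.estimate.get)

p = LatencyPredictor()
for d in (10, 12, 11):
    p.observe("hop-A", d)   # stable, fast path
for d in (30, 5, 40):
    p.observe("hop-B", d)   # jittery path: the average stays high
```

Even this trivial predictor makes the research tradeoffs concrete: the state it keeps is memory inside the network, and its accuracy versus the cost of gathering observations is exactly the balance the project studies.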
Computer Science Department, University of British Columbia
Compute- and Data-Intensive Processing using MPI over SCTP
Our goal is to provide a robust and secure system for the execution of data- and compute-intensive applications in cluster and grid environments. We propose to design, implement, evaluate, and release SCTP-based middleware for support of message-passing programs using MPI (Message Passing Interface). We intend to extend the design to MPI-2 to support dynamic processes and RDMA, and investigate the potential advantages of using SCTP in a cluster environment. We will integrate our SCTP-based middleware into MPICH2 and Open MPI, two public domain versions of MPI, to allow others to use and experiment with SCTP and MPI in cluster and grid environments.