Skip redundant pieces

Electrical Engineering and Computer Science

Defense Notices

EECS MS and PhD Defense Notices

All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.




Upcoming Defense Notices


QI SHI - Application of Split-Step Fourier Method and Gaussian Noise Model in the Calculation of Nonlinear Interference in Uncompensated Optical Coherent WDM System

MS Project Defense (EE)

When & Where:
June 12, 2015
2:00 pm
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Erik Perrins

Abstract: [ Show / Hide ]
Wavelength division multiplexing (WDM) is a technology that combines a number of independent information-carrying signals with different wavelengths into the same fiber. This enables several channels of high-quality, large-capacity optical signals to be transmitted simultaneously in a single fiber. WDM is the most popular long-distance transmission solution today and is widely deployed in terrestrial backbone and intercontinental undersea fiber-optic transmission systems. Effective and efficient analysis methods for WDM systems are indispensable for two reasons. First, deploying a WDM system is usually a time- and money-consuming project, so an accurate design is required before construction. Second, optical network routing protocols rely on fast and accurate real-time evaluation and prediction of network performance. Two main distinct phenomena affect overall WDM system performance: amplified spontaneous emission (ASE) noise accumulation and nonlinear interference (NLI) due to the Kerr effect. ASE noise is already well understood, but the calculation of NLI is complicated. A popular approach, the Split-Step Fourier (SSF) method, directly solves the nonlinear propagation equation numerically and is widely used to understand pulse propagation in nonlinear dispersive media. Though the SSF method provides an accurate result for NLI, its high computational expense prevents it from satisfying the efficiency requirement mentioned above. Fortunately, the Gaussian Noise (GN) model, which to a large extent resolves this issue, has been proposed and developed in recent years.
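For readers unfamiliar with the SSF method, the core of each propagation step is a nonlinear phase rotation in the time domain followed by a dispersion filter in the frequency domain. A minimal, illustrative NumPy sketch in normalized units (the parameter values are arbitrary, not those used in the project):

```python
import numpy as np

def ssf_step(field, h, beta2, gamma, dt):
    """One split-step of the scalar nonlinear Schroedinger equation:
    Kerr nonlinearity applied in time, then dispersion applied in frequency."""
    # Nonlinear step: phase rotation proportional to instantaneous power
    field = field * np.exp(1j * gamma * np.abs(field) ** 2 * h)
    # Linear step: multiply by the dispersion transfer function in frequency
    omega = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)
    return np.fft.ifft(np.fft.fft(field) * np.exp(0.5j * beta2 * omega ** 2 * h))

# Propagate a Gaussian pulse over 10 small steps (normalized units)
t = np.linspace(-10, 10, 1024)
pulse = np.exp(-t ** 2)
out = pulse.astype(complex)
for _ in range(10):
    out = ssf_step(out, h=0.01, beta2=-1.0, gamma=1.0, dt=t[1] - t[0])
```

Both sub-steps are pure phase multiplications, so total pulse energy is conserved, which is a convenient sanity check on any SSF implementation.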



SIVA PRAMOD BOBBILI - Static Disassembly of Binary using Symbol Table Information

MS Project Defense (CoE)

When & Where:
June 1, 2015
10:30 am
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Andy Gill
Jerzy Grzymala-Busse

Abstract: [ Show / Hide ]
Static binary analysis is an important challenge with many applications in security and performance optimization. One of the main challenges in analyzing an executable file statically is discovering all the instructions in the binary executable. It is often difficult to discover all program instructions due to a well-known limitation in static binary analysis called the code discovery problem. Some of the main contributors to the code discovery problem are variable-length CISC instructions, data interspersed with code, padding bytes for branch target alignment, and indirect jumps. All these problems manifest themselves in x86 binary files, which is unfortunate since x86 is the most popular architecture in the desktop and server domains.
Although much recent research has suggested that the symbol table might help overcome the difficulties of code discovery, the extent to which it can actually help is still in question. This work focuses on assessing the benefit of using symbol table information to overcome the limitations of the code discovery problem and identify more or all instructions in x86 binary executable files. We discuss the details, extent, limitations, and impact of instruction discovery with and without symbol table information.
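The code discovery problem is easy to illustrate on a toy variable-length instruction set, and the symbol table's role is to restrict decoding to known function extents. A hypothetical sketch (the toy ISA, byte layout, and symbol tuples below are invented for illustration, not taken from the project):

```python
# Toy variable-length ISA: the opcode byte encodes instruction length (1-3 bytes).
LENGTHS = {0x01: 1, 0x02: 2, 0x03: 3}

def disassemble(code, start, end):
    """Decode instructions linearly from `start` until `end`; return their offsets."""
    offs, pc = [], start
    while pc < end and code[pc] in LENGTHS:
        offs.append(pc)
        pc += LENGTHS[code[pc]]
    return offs

# func_a occupies bytes 0-2, embedded data bytes 3-4, func_b bytes 5-7.
# The data byte 0x03 looks like a 3-byte opcode, so a linear sweep derails.
binary = bytes([0x01, 0x02, 0xAA,   # func_a: instructions at offsets 0 and 1
                0x03, 0x03,         # data interspersed with code
                0x02, 0xBB, 0x01])  # func_b: instructions at offsets 5 and 7

# Hypothetical symbol table: (start, size) per function, as .symtab would provide.
symbols = [(0, 3), (5, 3)]

linear = disassemble(binary, 0, len(binary))   # sweeps into the data region
guided = [o for s, n in symbols for o in disassemble(binary, s, s + n)]
```

The linear sweep misdecodes the data as an instruction and misses all of `func_b`, while the symbol-guided pass recovers every real instruction.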



JONATHAN LUTES - SafeExit: Exit Node Protection for TOR

MS Thesis Defense (CS)

When & Where:
May 26, 2015
1:00 pm
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Prasad Kulkarni

Abstract: [ Show / Hide ]
TOR is one of the most important networks for providing anonymity over the internet. However, in some cases its exit node operators open themselves up to various legal challenges, a fact which discourages participation in the network. In this paper, we propose a mechanism for allowing some users to be voluntarily verified by trusted third parties, providing a means by which an exit node can verify that it is not the true source of traffic. This is done by extending TOR's anonymity model to include another class of user, and using a web-of-trust mechanism to create chains of trust.



KAVYASHREE PILAR - Digital Down Conversion and Compression of Radar Data

MS Project Defense (EE)

When & Where:
May 26, 2015
1:00 pm
317 Nichols Hall
Committee Members:
Carl Leuschen, Chair
Shannon Blunt
Glenn Prescott

Abstract: [ Show / Hide ]
Storage and handling of the huge volume of received data samples is one of the major challenges in radar system design. Radar data samples potentially have high temporal and spatial correlation depending on the target characteristics and radar settings. This correlation can be exploited to compress them without any loss of sensitivity in post-processed products. This project focuses on reducing the storage requirement of a radar used for remote sensing of ice sheets. At the front end of the radar receiver, the data sample rate can be reduced in real time by performing frequency down-conversion and decimation of the incoming data. The decimated signal can be further compressed by applying a suitable data compression algorithm. The project implements a digital down-converter, a decimator, and a data compression module on an FPGA. A literature survey suggests that considerable research is being done on customized radar data compression algorithms. This project analyzes the possibility of using general-purpose algorithms such as GZIP and lossless JPEG 2000 to compress radar data. It also considers a simple floating-point compression technique to convert 16-bit data to 8-bit data, guaranteeing a 50% reduction in data size. The project implements the 16-to-8-bit conversion, lossless JPEG 2000, and GZIP algorithms in Matlab and compares their SNR performance on radar data. Simulations suggest that all of them have similar SNR performance, but the JPEG 2000 and GZIP algorithms offer a compression ratio of over 90%. However, the 16-to-8-bit compression is implemented in this project because of its simplicity.
A hardware test bed is implemented to integrate the digital radar electronics with the Matlab Simulink simulation tools in a hardware-in-the-loop (HIL) configuration. The digital down-converter, decimator, and data compression module are prototyped in Simulink. The design is then implemented on an FPGA using Verilog code. The functionality is tested at various stages of development using ModelSim simulations, Altera DSP Builder's HDL import, HIL co-simulation, and SignalTap. This test bed can also be used for future development efforts.
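The abstract does not specify the exact 16-to-8-bit scheme; one plausible floating-point-style sketch is block floating point, which keeps a shared exponent per block and an 8-bit mantissa per sample (the block size and rounding here are assumptions, not the project's actual design):

```python
import numpy as np

def compress_16_to_8(samples, block=256):
    """Block-floating-point sketch: one shared right-shift (exponent) per block,
    each 16-bit sample stored as a truncated signed 8-bit mantissa."""
    mants, exps = [], []
    for i in range(0, len(samples), block):
        chunk = np.asarray(samples[i:i + block], dtype=np.int32)
        peak = int(np.abs(chunk).max()) or 1
        shift = max(0, peak.bit_length() - 7)   # make the peak fit signed 8 bits
        exps.append(shift)
        mants.append((chunk >> shift).astype(np.int8))
    return np.concatenate(mants), exps

def decompress(mantissas, exps, block=256):
    chunks = [mantissas[i * block:(i + 1) * block].astype(np.int32) << e
              for i, e in enumerate(exps)]
    return np.concatenate(chunks)

rng = np.random.default_rng(0)
x = rng.integers(-2**15, 2**15, size=1024, dtype=np.int16)
mant, exps = compress_16_to_8(x)
y = decompress(mant, exps)
```

The mantissa array is exactly half the size of the input, matching the guaranteed 50% reduction, at the cost of quantization error bounded by the per-block shift.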



SURYA TEJ NIMMAKAYALA - Exploring Causes of Performance Overhead during Dynamic Binary Translation

MS Thesis Defense (CS)

When & Where:
May 26, 2015
11:00 am
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Fengjun Li
Bo Luo

Abstract: [ Show / Hide ]
Dynamic Binary Translators (DBTs) have applications in program portability, instrumentation, optimization, and software security. To achieve these goals and maintain control over the application's execution, DBTs translate and run the original source/guest programs in a sand-boxed environment. DBT systems apply several optimization techniques, such as code caching and trace creation, to reduce the translation overhead and enhance program performance at run time. However, even with these optimizations, DBTs typically impose a significant performance overhead, especially for short-running applications. This performance penalty has restricted the wider adoption of DBT technology, in spite of its obvious benefits.

The goal of this work is to determine the different factors that contribute to the performance penalty imposed by dynamic binary translators. In this thesis, we describe the experiments that we designed to achieve our goal and report our results and observations. We use a popular and sophisticated DBT, DynamoRIO, as our test platform, and employ the industry-standard SPEC CPU2006 benchmarks to capture run-time statistics. Our experiments find that DynamoRIO executes a large number of additional instructions when compared to native application execution. We further measure that this increase in the number of executed instructions is caused by the DBT frequently exiting the code cache to perform various management tasks at run time, including code translation, indirect branch resolution, and trace formation. We also find that the performance loss experienced by the DBT is directly proportional to the number of code cache exits. We discuss the details of the experiments, results, observations, and analysis in this work.





Past Defense Notices


XUN WU - A Global Discretization Approach to Handle Numerical Attributes as Preprocessing

MS Thesis Defense (CS)

When & Where:
May 21, 2015
1:30 pm
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Heechul Yun

Abstract: [ Show / Hide ]
Discretization is a common technique for handling numerical attributes in data mining; it divides continuous values into several intervals by defining multiple thresholds. Decision tree learning algorithms, such as C4.5 and random forests, deal with numerical attributes by applying discretization and transforming them into nominal attributes based on an impurity-based criterion such as information gain or Gini gain. However, a considerable number of distinct values end up in the same interval after discretization, so information carried by the original continuous values is lost.
In this thesis, we propose a global discretization method that keeps the information within the original numerical attributes by expanding them into multiple nominal ones, one for each candidate cut-point value. The discretized data set, which includes only nominal attributes, evolves from the original data set. We analyzed the problem by applying two decision tree learning algorithms, C4.5 and random forests, to each of twelve pairs of data sets (original and discretized) and evaluating the prediction accuracy of the obtained classification models in the Weka Experimenter. This is followed by two separate Wilcoxon tests (one per learning algorithm) to decide whether there is a statistically significant difference between the paired data sets. Results of both tests indicate that there is no clear difference in performance between the discretized data sets and the original ones.
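The expansion step described above can be sketched in a few lines: each candidate cut-point (a midpoint between consecutive distinct sorted values) becomes one boolean attribute. This is an illustrative reading of the method, not the thesis's exact implementation:

```python
def expand_numeric(values):
    """Replace a numerical attribute by one boolean attribute per candidate
    cut-point (midpoints between consecutive distinct sorted values)."""
    distinct = sorted(set(values))
    cuts = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
    # Row i of the result answers, for each cut c: is values[i] <= c?
    return cuts, [[v <= c for c in cuts] for v in values]

# Four cases of one numerical attribute expand into two boolean attributes
cuts, table = expand_numeric([4.0, 6.0, 5.0, 4.0])
```

No two distinct original values share the same row pattern, so unlike interval-based discretization, the expansion loses no ordering information.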



YUFEI CHENG - Future Internet Routing Design for Massive Failures and Attacks

PhD Comprehensive Defense (EE)

When & Where:
May 20, 2015
8:00 am
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Fengjun Li
Gary Minden
Deep Medhi
Jiannong Cao
Michael Vitevitch*

Abstract: [ Show / Hide ]
With the increasing frequency of natural disasters and intentional attacks that challenge optical networks, vulnerability to cascading and regionally correlated challenges is escalating. Given the high complexity and large traffic load of optical networks, correlated challenges pose great damage to reliable network communication. We start our research by proposing a critical region identification mechanism and study different vulnerability scales using real-world physical network topologies. We further propose geographical diversity and incorporate it into a new graph resilience metric, cTGGD (compensated Total Geographical Graph Diversity), which is capable of characterizing and differentiating the resilience levels of different physical networks. We propose the path geodiverse problem (PGD) and two heuristics that solve it with less complexity than the optimal algorithm. The geodiverse paths are optimized with a delay-skew optimization formulation for optimal traffic allocation. We implement GeoDivRP in ns-3 to employ the optimized paths and demonstrate their effectiveness compared to OSPF Equal-Cost Multi-Path (ECMP) routing in terms of both throughput and overall link utilization. From the attacker's perspective, we have analyzed the mechanisms attackers could use to maximize attack impact with a limited budget and demonstrate the effectiveness of different network restoration plans.
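A simple proxy for path geodiversity (not the proposal's exact cTGGD or PGD formulation) is the minimum great-circle separation between the interior nodes of two paths: the larger it is, the less likely a single regional challenge disables both paths. A sketch with illustrative coordinates:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def path_geodiversity(path_a, path_b):
    """Minimum node-to-node separation between the interiors of two paths
    that share their endpoints."""
    return min(haversine_km(p, q) for p in path_a[1:-1] for q in path_b[1:-1])

# Two hypothetical fiber paths between the same endpoints, one via Chicago,
# one via Dallas (coordinates are approximate city locations)
path_a = [(39.0, -94.6), (41.9, -87.6), (40.7, -74.0)]
path_b = [(39.0, -94.6), (32.8, -96.8), (40.7, -74.0)]
div = path_geodiversity(path_a, path_b)
```

Here `div` is roughly 1300 km, so no single regional disaster of smaller radius can sever both paths at their interior nodes.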



DARSHAN RAMESH - Configuration of Routing Protocols on Routers using Quagga

MS Project Defense (EE)

When & Where:
May 19, 2015
2:00 pm
246 Nichols Hall
Committee Members:
Joseph Evans, Chair
Victor Frost
Glenn Prescott

Abstract: [ Show / Hide ]
With the increasing number of devices being connected to the network, efficient connection of those devices is very important. Routing protocols have evolved over time. I have used Mininet and Quagga to implement routing protocols in a topology with ten routers and eleven host machines. Initially, the basic configuration of the routers is done to bring their interfaces administratively up, and IP addresses are assigned. Static routes are configured on the routers using the Quagga zebra daemon. Because of the overhead observed, static routing is replaced with RIPv2 using the Quagga ripd daemon, and RIPv2 features such as MD5 authentication and split horizon are implemented. RIPv2 is then replaced with the OSPF routing protocol, and the differences between static and dynamic protocols are observed. Complex OSPF applications are implemented using the Quagga ospfd daemon, and the best route to neighboring routers is changed using the OSPF cost attribute. Next, the networks in the lab are assumed to belong to different autonomous systems, and BGP is implemented using the Quagga bgpd daemon. The routing updates are filtered using access-list attributes. The path to neighboring routers is changed using BGP metrics such as MED, weight, AS_PATH, and local_pref. Load balancing is also implemented, and the results are verified using traceroute and routing tables.



RUXIN XIE - Single-Fiber-Laser-Based Multimodal Coherent Raman System

PhD Comprehensive Defense (EE)

When & Where:
May 18, 2015
10:00 am
250 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Shannon Blunt
Victor Frost
Carey Johnson

Abstract: [ Show / Hide ]
A single-fiber-laser-based coherent Raman scattering (CRS) spectroscopy and microscopy system can automatically maintain frequency synchronization between the pump and Stokes beams, which dramatically simplifies the setup. The Stokes frequency shift is generated by soliton self-frequency shift (SSFS) in a photonic crystal fiber. The impact of pulse chirping on the signal power reduction of coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS) has been investigated through theoretical analysis and experiment, and strategies for system optimization are discussed.
Our multimodal system provides measurement diversity among CARS, SRS, and photothermal imaging, which can be used for comparison and offers complementary information. The distribution of hemoglobin in human red blood cells and of lipids in sliced mouse brain samples has been imaged, and the frequency and power dependence of the photothermal signal is characterized.
Instead of using an intensity-modulated pump, the polarization-switched SRS method is applied to our system by changing the polarization of the pump. Based on the polarization dependence of the third-order susceptibility of the material, this method is able to eliminate the nonresonant photothermal signal from the resonant SRS signal. Red blood cells and sliced mouse brain samples were imaged to demonstrate the capability of the proposed technique. The results show that polarization-switched SRS removes most of the photothermal signal.



VENU GOPAL BOMMU - Performance Analysis of Various Implementations of Machine Learning Algorithms

MS Project Defense (CS)

When & Where:
May 14, 2015
2:00 pm
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Luke Huan
Bo Luo

Abstract: [ Show / Hide ]
Rapid development in technologies and database systems results in producing and storing large amounts of data. With such an enormous increase in data over the last few decades, data mining has become a useful tool to discover the knowledge hidden in large data sets. Domain experts often use machine learning algorithms to find theories that explain their data.
In this project we compare the Weka implementations of CART and C4.5 with their original implementations on different data sets from the University of California, Irvine (UCI) repository. Comparisons of these implementations have been carried out in terms of accuracy, decision tree complexity, and area under the ROC curve (AUC). Results from our experiments show that the decision tree complexity of C4.5 is much higher than that of CART, and that the original implementations of these algorithms perform slightly better than their corresponding Weka implementations in terms of accuracy and AUC.



SRI HARSHA KOMARINA - System Logging and Analysis using Time Series Databases

MS Project Defense (CS)

When & Where:
May 13, 2015
10:00 am
2001B Eaton Hall
Committee Members:
Joseph Evans, Chair
Prasad Kulkarni
Bo Luo

Abstract: [ Show / Hide ]
Logging system information and its metrics provides a valuable resource for monitoring the system for unusual activity and understanding the various factors affecting its performance. Though several tools are available to log and analyze a system locally, it is inefficient to analyze every system individually, and local logging is seldom effective in case of hardware failure. Centralized access to this information aids system administrators in performing their operational tasks. Here we present a centralized logging solution for system logs and metrics using Time Series Databases (TSDBs). We provide reliable storage and efficient access to system information by storing the parsed system logs and metrics in a TSDB. In this project, we develop a solution to store syslog, the system's default log store, as well as system metrics like CPU load, disk load, and network traffic load in a TSDB. We further extend our ability to monitor and analyze the data in our TSDB by using an open-source graphing tool.
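Many TSDBs ingest samples as plain-text records; InfluxDB's line protocol is a common example of the shape such records take. A simplified formatter (numeric fields only, no escaping; the measurement and tag names are illustrative, not the project's schema):

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one sample as an InfluxDB-style line-protocol record:
    measurement,tag=value field=value,... timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

# A parsed CPU-load sample from one host, with a fixed timestamp for clarity
line = to_line_protocol("cpu_load", {"host": "web01"},
                        {"load1": 0.42, "load5": 0.35},
                        ts_ns=1431500400000000000)
```

Each metric collected from syslog or the system counters becomes one such line, which the TSDB indexes by measurement, tags, and time.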




EVAN AUSTIN - Theorem Provers as Libraries — An Approach to Formally Verifying Functional Programs

PhD Dissertation Defense (CS)

When & Where:
May 11, 2015
10:00 am
246 Nichols Hall
Committee Members:
Perry Alexander, Chair
Arvin Agah
Andy Gill
Prasad Kulkarni
Erik Van Vleck*

Abstract: [ Show / Hide ]
Property-directed verification of functional programs tends to take one of two paths.
First, is the traditional testing approach, where properties are expressed in the original programming language and checked with a collection of test data.
Alternatively, for those desiring a more rigorous approach, properties can be written and checked with a formal tool; typically, an external proof system.
This dissertation details a hybrid approach that captures the best of both worlds: the formality of a proof system paired with the native integration of an embedded, domain specific language (EDSL) for testing.

At the heart of this hybridization is the titular concept: a theorem prover as a library.
The verification capabilities of this prover, HaskHOL, are introduced to a Haskell development environment as a GHC compiler plugin.
Operating at the compiler level provides for a comparatively simpler integration and allows verification to co-exist with the numerous other passes that stand between source code and program.

The logical connection between language and proof library is formalized, and the open problems related to this connection are documented.
Additionally, the resultant, novel verification workflow is applied to two major classes of problems, type class laws and polymorphic test cases, to judge the real-world feasibility of compiler-directed verification.
These applications and formalization serve to position this work relative to existing work and to highlight potential, future extensions.



CAMERON LEWIS - Ice Shelf Melt Rates and 3D Imaging

PhD Dissertation Defense (EE)

When & Where:
May 8, 2015
9:30 am
317 Nichols Hall
Committee Members:
Prasad Gogineni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale*

Abstract: [ Show / Hide ]
Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate-bandwidth VHF ice-penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra-wide-bandwidth (UWB) UHF ice-penetrating radar to obtain finer-resolution data on ice shelves. These data reveal significant details about the basal interface, including the locations and depths of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it images only a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3-D imaging technique to a data set collected using a ground-based multichannel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider-swath 3-D image of an ice shelf. The 3-D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.



RALPH BAIRD - Isomorphic Routing Protocol

MS Thesis Defense (IT)

When & Where:
May 5, 2015
1:00 pm
250 Nichols
Committee Members:
Victor Frost, Chair
Bo Luo
Hossein Saiedian

Abstract: [ Show / Hide ]
A mobile ad hoc network (MANET) routing algorithm defines the path packets take to reach their destination using measurements of attributes such as adjacency and distance. Graph theory is increasingly applied in many fields of research today to model the properties of data on a graph plane; in networking, it is applied to form structures from patterns of nodes. Conventional MANET protocols are often based on path measurements from wired network algorithms and do not implement mechanisms to mitigate route entropy, defined as the progression of a converged path to a path-loss state as a result of increasing random movement. Graph isomorphism measures equality beginning in the individual node and in sets of nodes and edges. The measurement of isomorphism is applied in this research to form paths from an aggregate set of route inputs, such as adjacency, cardinality to impending nodes in a path, and network width. A routing protocol based on the presence of isomorphism in a MANET topology is then tested to measure its performance.
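For the small node neighborhoods a MANET node observes, graph isomorphism can be checked by brute force over relabelings. This is a generic sketch of the isomorphism test itself, not the thesis's actual protocol mechanism:

```python
from itertools import permutations

def is_isomorphic(edges_a, nodes_a, edges_b, nodes_b):
    """Brute-force isomorphism for small graphs: try every relabeling of
    A's nodes onto B's nodes and test undirected edge-set equality."""
    if len(nodes_a) != len(nodes_b):
        return False
    ea = {frozenset(e) for e in edges_a}
    eb = {frozenset(e) for e in edges_b}
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        if {frozenset(mapping[x] for x in e) for e in ea} == eb:
            return True
    return False

# A 4-cycle matches another 4-cycle under relabeling, but not a 4-node path
cycle1 = [(1, 2), (2, 3), (3, 4), (4, 1)]
cycle2 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
path = [("a", "b"), ("b", "c"), ("c", "d")]
nums, letters = [1, 2, 3, 4], ["a", "b", "c", "d"]
```

Brute force is factorial in node count, which is why practical protocols rely on cheap invariants (degree sequences, cardinality) before any exact comparison.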



DAIN VERMAAK - Application of Half Spaces in Bounding Wireless Internet Signals for use in Indoor Positioning

MS Thesis Defense (CoE)

When & Where:
May 4, 2015
9:30 am
246 Nichols Hall
Committee Members:
Joseph Evans, Chair
Jim Miller
Gary Minden

Abstract: [ Show / Hide ]
The problem of outdoor positioning has been largely solved via the use of GPS. This thesis addresses the problem of determining position in areas where GPS is unavailable. No clear solution exists for indoor localization and all approximation methods offer unique drawbacks. To mitigate the drawbacks, robust systems combine multiple complementary approaches. In this thesis, fusion of wireless internet access points and inertial sensors was used to allow indoor positioning without the need for prior information regarding surroundings. Implementation of the algorithm involved development of three separate systems. The first system simply combines inertial sensors on the Android Nexus 7 to form a step counter capable of providing marginally accurate initial measurements. Having achieved reliable initial measurements, the second system receives signal strength from nearby wireless internet access points, augmenting the sensor data in order to generate half-planes. The half-planes partition the available space and bound the possible region in which each access point can exist. Lastly, the third system addresses the tendency of the step counter to lose accuracy over time by using the recently established positions of the access points to correct flawed values. The resulting process forms a simple feedback loop.
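The half-plane construction described above can be sketched as follows: if an access point's signal is stronger at known position p1 than at p2, the access point is taken to be closer to p1, so it must lie on p1's side of the perpendicular bisector of the segment p1-p2. This idealizes RSSI as monotone in distance, which real measurements only approximate:

```python
def half_plane(p_strong, p_weak):
    """Half-plane constraint from two RSSI readings at known positions:
    the access point lies closer to p_strong than to p_weak.
    Returns (a, b, c) encoding the constraint a*x + b*y <= c."""
    (x1, y1), (x2, y2) = p_strong, p_weak
    # |p - p_strong|^2 <= |p - p_weak|^2 reduces to a linear inequality
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = x2 ** 2 + y2 ** 2 - x1 ** 2 - y1 ** 2
    return a, b, c

def satisfies(plane, p):
    """Is point p inside the half-plane?"""
    a, b, c = plane
    return a * p[0] + b * p[1] <= c

# Signal measured stronger at the origin than at (4, 0):
# the access point must lie left of the bisector x = 2
plane = half_plane((0.0, 0.0), (4.0, 0.0))
```

Intersecting the half-planes from many position pairs shrinks the region in which each access point can exist, which is how the second system bounds their locations.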



ANDREW FARMER - HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler

PhD Dissertation Defense (CS)

When & Where:
April 30, 2015
9:00 am
250 Nichols
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Chris Depcik*

Abstract: [ Show / Hide ]
It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation.

When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen-and-paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming, and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning.

This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or transform them to improve their performance.
Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed.

HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.



JAY McDANIEL - Design, Integration, and Miniaturization of a Multichannel Ultra-Wideband Snow Radar Receiver and Passive Microwave Components

MS Thesis Defense (EE)

When & Where:
April 28, 2015
9:00 am
129 Nichols
Committee Members:
Carl Leuschen, Chair
Stephen Yan, Co-Chair
Prasad Gogineni

Abstract: [ Show / Hide ]
To meet the demand for additional snow characterization from the Intergovernmental Panel on Climate Change (IPCC), a new airborne multichannel, quad-polarized 2-18 GHz snow radar has been proposed. With tight size and weight constraints on the airborne platforms deployed with the Naval Research Laboratory (NRL), integrated and miniaturized receivers are crucial for cost and size reduction in future deployments.

A set of heterodyne microwave receivers was developed to enable snow thickness measurements from survey altitudes of 500 to 5000 feet while nadir-looking, and estimation of snow water equivalent (SWE) from polarimetric backscattered signals at 30 degrees off nadir. The individual receiver has undergone a five-fold size reduction with respect to the initial prototype design, while achieving an average sensitivity of -125 dBm across the 2-18 GHz bandwidth, enabling measurements with a vertical range resolution of 1.64 cm in snow. A compact enclosure was designed to accommodate up to 18 individual receiver modules, allowing multichannel quad-polarized measurements over the entire 16 GHz bandwidth. The receiver bank was tested individually and with the entire system in a full multichannel loop-back measurement, using a 2.95 μs optical delay line, resulting in a beat frequency of 200 MHz with 20 dB range sidelobes. Due to the multi-angle, multi-polarization, and multi-frequency content of the data, the number of free parameters in the SWE estimation can be significantly reduced.

Design equations have been derived, and a new method for modeling suspended substrate stripline (SSS) filters in ADS for rapid prototyping has been developed. Two SSS filters were designed: an optimized Chebyshev SSS low-pass filter (LPF) with an 18 GHz cutoff frequency and a broadside-coupled SSS high-pass filter (HPF) with a 2 GHz cutoff frequency. In addition, a 2-18 GHz three-port transverse electromagnetic (TEM) mode hybrid 8:1 power combiner was designed and modeled at CReSIS. This design will be integrated into the dual-polarized Vivaldi antenna array with 8 active dual-polarized elements to implement a lightweight and compact array structure, eliminating cable and connector cost and losses.





VADIRAJ HARIBAL - Modelling of ATF-38143 P-HEMT Driven Resistive Mixer for VHF KNG P-150 Portable Radios

MS Project Defense (EE)

When & Where:
April 23, 2015
9:30 am
250 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Alessandro Salandrino

Abstract: [ Show / Hide ]
FET resistive mixers play a key role in providing high linearity and low noise figure. HEMT technology with low threshold voltage has popularized the mobile phone market and millimeter-wave technologies. The project analyzes the operation of a down-conversion VHF FET resistive mixer model designed using the ultra-low-noise ATF-38143 P-HEMT. It is widely used in KNG-P150 portable mobile radios manufactured by RELM Wireless Corporation. The mixer is designed to function within an RF frequency range of 136-174 MHz at an IF frequency of 51.50 MHz. The Statz model has been used to simulate the operation of the P-HEMT under normal conditions. Transfer functions of the matching circuits at each port have been obtained using Simulink modeling. The effect of changes in Q factor at the RF and IF ports has been considered. Analytical modeling of the mixer is performed, and simulated results are compared with experimental data obtained at a constant 5 dBm LO power. The IF transfer function has been modeled to closely match the practical circuit by applying adequate amplitude damping to the response of the LC circuits at the RF port, in order to provide the required IF bandwidth and conversion gain. The effects of stray capacitances and inductances have been neglected during the modeling, and changes in the series resistance of the inductors at the RF and IF ports have been made to match experimental results.



MOHAMMED ALENAZI - Network Resilience Improvement and Evaluation Using Link Additions

PhD Dissertation Defense (CS)

When & Where:
April 21, 2015
12:00 pm
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
David Tipper
Krzysztof Walkowiak
Tyrone Duncan*

Abstract: [ Show / Hide ]
Computer networks are increasingly involved in providing services for most of our daily activities in education, business, health care, social life, and government. Publicly available computer networks are prone to targeted attacks and natural disasters that can disrupt normal operation and services. Building highly resilient networks is therefore an important aspect of their design and implementation. For existing networks, resilience against such challenges can be improved by adding more links. In fact, adding links to form a full mesh yields the most resilient network, but at an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding an optimal set of links via exhaustive search is impractical for large networks. Instead, a greedy algorithm obtains a feasible solution by adding links that improve network connectivity according to a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the resulting resilience. The flow robustness results under these attacks show that the improved networks are more resilient than the non-improved networks.
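The greedy link-addition idea can be sketched in a few lines of pure Python. As a stand-in for the flow robustness metric, the sketch uses the fraction of node pairs that remain connected; the function names and the simple metric are illustrative, not the dissertation's exact formulation:

```python
from itertools import combinations

def flow_robustness(nodes, edges):
    """Fraction of node pairs that are connected (a simple proxy for
    the flow-robustness metric)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comp_sizes = set(), []
    for n in nodes:                      # find connected components
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        comp_sizes.append(len(comp))
    pairs = len(nodes) * (len(nodes) - 1) // 2
    connected = sum(c * (c - 1) // 2 for c in comp_sizes)
    return connected / pairs if pairs else 1.0

def greedy_add_links(nodes, edges, budget):
    """Greedily add the candidate link that most improves the metric,
    one link per iteration, up to the given budget."""
    edges = list(edges)
    for _ in range(budget):
        existing = {frozenset(e) for e in edges}
        candidates = [e for e in combinations(nodes, 2)
                      if frozenset(e) not in existing]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda e: flow_robustness(nodes, edges + [e]))
        edges.append(best)
    return edges
```

On a four-node graph split into two components, a budget of one link suffices for the greedy step to bridge the components and raise the metric from 1/3 to 1.0.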




WENRONG ZENG - Content-Based Access Control

PhD Dissertation Defense (CS)

When & Where:
April 3, 2015
1:00 pm
250 Nichols Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei*

Abstract: [ Show / Hide ]
In conventional databases, the most popular access control models specify policies explicitly and manually for each role of every user against each data object. In large-scale content-centric data sharing, such conventional approaches become impractical due to the explosive growth of data and the sensitivity of data objects. Moreover, conventional database access control policies fail when the semantic content of data is expected to play a role in access decisions. Users are often over-privileged, and ex post facto auditing is relied upon to detect misuse of privileges. Unfortunately, it is usually difficult to reverse the damage, as (large amounts of) data have been disclosed already. In this dissertation, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, CBAC makes access control decisions automatically based on the content similarity between user credentials and data content. In CBAC, a meta-rule allows each user to access "a subset" of the designated data objects of a content-centric database, where the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle's Virtual Private Database (VPD) to implement row-wise access control and to prevent data objects from being abused through unnecessary access admission. To further improve performance, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement and reveals a more relevant portion of the data objects compared with using the user credentials and data content alone. We also employ several tagging mechanisms for more accurate textual content matching on short text snippets (e.g., short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of the data. With tagging, content similarity is computed not purely from word occurrences but from the semantic topics underlying the text. Experimental results show that CBAC makes accurate access control decisions with a small overhead.
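The core CBAC idea, granting access when a row's textual content is sufficiently similar to the user's credentials, can be sketched with a simple bag-of-words cosine similarity. This is a minimal illustration only; the dissertation's actual similarity computation, meta-rules, and threshold are more sophisticated, and all names below are hypothetical:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two text snippets."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cbac_allow(user_credentials, row_content, threshold=0.2):
    """Illustrative meta-rule: grant row-level access when the row's
    content is similar enough to the user's credentials."""
    return cosine_similarity(user_credentials, row_content) >= threshold
```

In an Oracle VPD deployment, a predicate equivalent to such a check would be attached to queries so that each user sees only the dynamically determined subset of rows.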



RANJITH KRISHNAN - The Xen Hypervisor : Construction of a Test Environment and Validation by Performing Performance Evaluation of Native Linux versus Xen Guests

MS Project Defense (CS)

When & Where:
March 30, 2015
3:00 pm
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Bo Luo
Heechul Yun

Abstract: [ Show / Hide ]
Modern computers are powerful enough to comfortably run multiple operating systems at the same time. Enabling this is the Xen hypervisor, an open-source tool that is one of the most widely used system virtualization solutions on the market. Xen enables guest virtual machines to run at near-native speeds using a concept called paravirtualization. The primary goal of this project is to construct a development/test environment in which we can investigate the different types of virtualization Xen supports. We start from a base Fedora installation onto which Xen is built and installed. Once Xen is running, we configure both paravirtualized and hardware-virtualized guests.
The second goal of the project is to validate the constructed environment through a performance evaluation. Various performance benchmarks are run on native Linux, the Xen host, and the two important types of Xen guests. As expected, our results show that the performance of the Xen guest machines is close to that of native Linux. We also see evidence of why virtualization-aware paravirtualization outperforms hardware virtualization, which runs without any knowledge of the underlying virtualization infrastructure.




JUSTIN METCALF - Signal Processing for Non-Gaussian Statistics: Clutter Distribution Identification and Adaptive Threshold Estimation

PhD Dissertation Defense (EE)

When & Where:
March 26, 2015
10:30 am
129 Nichols
Committee Members:
Shannon Blunt, Chair
Luke Huan
Lingjia Liu
Jim Stiles
Tyrone Duncan*

Abstract: [ Show / Hide ]
We examine the problem of determining a decision threshold for the binary hypothesis test that naturally arises when a radar system must decide if there is a target present in a range cell under test. Modern radar systems require predictable, low, constant rates of false alarm (i.e. when unwanted noise and clutter returns are mistaken for a target). Measured clutter returns have often been fitted to heavy tailed, non-Gaussian distributions. The heavy tails on these distributions cause an unacceptable rise in the number of false alarms. We use the class of spherically invariant random vectors (SIRVs) to model clutter returns. SIRVs arise from a phenomenological consideration of the radar sensing problem, and include both the Gaussian distribution and most commonly reported non-Gaussian clutter distributions (e.g. K distribution, Weibull distribution).

We propose an extension of a prior technique called the Ozturk algorithm. The Ozturk algorithm generates a graphical library of points corresponding to known SIRV distributions. These points are generated from linked vectors whose magnitude is derived from the order statistics of the SIRV distributions. Measured data is then compared to the library and a distribution is chosen that best approximates the measured data. Our extension introduces a framework of weighting functions and examines both a distribution classification technique as well as a method of determining an adaptive threshold in data that may or may not belong to a known distribution. The extensions are then compared to neural networking techniques. Special attention is paid to producing a robust, adaptive estimation of the detection threshold. Finally, divergence measures of SIRVs are examined.
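The SIRV model described above has a simple generative form: each sample is a Gaussian "speckle" component modulated by a random "texture" variable, and a gamma-distributed texture yields the K distribution. The sketch below illustrates that construction in pure Python; the parameterization shown is one common convention, and the function name is illustrative rather than taken from the dissertation:

```python
import math
import random

def sirv_samples(n, shape=0.5, seed=0):
    """Draw n real-valued samples from a K-distributed SIRV:
    zero-mean Gaussian speckle scaled by the square root of a
    unit-mean gamma texture. Smaller shape values give heavier
    tails, the regime where Gaussian-based thresholds produce
    excess false alarms."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        texture = rng.gammavariate(shape, 1.0 / shape)  # unit-mean texture
        out.append(math.sqrt(texture) * rng.gauss(0.0, 1.0))
    return out
```

Setting the texture to a constant recovers the Gaussian case, which is why the SIRV class contains both Gaussian and heavy-tailed clutter models.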




ALHANOOF ALTHNIAN - Evolutionary Learning of Goal-Oriented Communication Strategies in Multi-Agent Systems

PhD Comprehensive Defense (CS)

When & Where:
March 13, 2015
2:00 pm
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni
Bo Luo
Sarah Kieweg*

Abstract: [ Show / Hide ]
Multi-agent systems are a common paradigm for building distributed systems in domains such as networking, health care, swarm sensing, robotics, and transportation. Performance goals can vary from one application to another according to the domain's specifications and requirements. Moreover, performance goals can vary over the course of task execution. For example, agents may initially be interested in completing the task as fast as possible, but if their energy hits a specific level while still working on the task, they might then need to switch their goal to minimizing energy consumption. Previous studies in multi-agent systems have observed that varying the type of information that agents communicate, such as goals and beliefs, has a significant impact on the performance of the system with respect to different, usually conflicting, performance metrics, such as speed of solution, communication efficiency, and travel distance/cost. Therefore, when designing a communication strategy for a multi-agent system, it is unlikely that one strategy can perform well with respect to all performance metrics. Yet, it is not clear in advance which strategy or communication decisions will be best with respect to each metric. Previous approaches to communication decisions in multi-agent systems either manually design one or more fixed communication strategies, extend agents' capabilities and use heuristics, or learn a strategy with respect to a single predetermined performance goal. To address this issue, this research introduces a goal-oriented communication strategy, where communication decisions are determined based on the desired performance goal. This work proposes an evolutionary approach for learning a goal-oriented communication strategy in multi-agent systems. The approach enables learning an effective communication strategy with respect to simple or complex measurable performance goals.
The learned strategy will determine what, when, and to whom the information should be communicated during the course of task execution.



JASON GEVARGIZIAN - Executables from Program Slices for Java Programs

MS Thesis Defense (CS)

When & Where:
February 13, 2015
11:00 am
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Andy Gill

Abstract: [ Show / Hide ]
Program slicing is a popular program decomposition and analysis technique
that extracts only those program statements that are relevant to particular points
of interest. Executable slices are program slices that are independently executable
and that correctly compute the values in the slicing criteria. Executable slices
can be used during debugging and to improve program performance through
parallelization of partially overlapping slices.

While program slicing and the construction of executable slicers has been
studied in the past, there are few acceptable executable slicers available,
even for popular languages such as Java.
In this work, we provide an extension to the T. J. Watson Libraries for
Analysis (WALA), an open-source Java application static analysis suite, to
generate fully executable slices.

We analyze the problem of executable slice generation in the context
of the capabilities provided and algorithms used by the WALA library.
We then employ this understanding to augment the existing WALA static SSA slicer
to efficiently track non-SSA data dependence, and couple this component with
our executable slicer backend.
We evaluate our slicer extension and find that it produces accurate
executable slices for all programs that fall within the limitations of the
WALA SSA slicer itself.
Our extension to generate executable program slices facilitates one of the
requirements of our larger project for a Java application automatic
partitioner and parallelizer.
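The notion of a backward slice, the statements relevant to a slicing criterion, can be illustrated with a toy straight-line slicer. This sketch works only on straight-line code with explicit def/use sets and is in no way a substitute for the WALA slicer, which operates on full Java SSA IR:

```python
def backward_slice(stmts, criterion):
    """Toy backward slicer for straight-line code.
    stmts: list of (defs, uses) pairs, one per statement, in order.
    criterion: index of the statement of interest.
    Returns the sorted indices of statements in the slice."""
    slice_ix = {criterion}
    relevant = set(stmts[criterion][1])   # variables used at the criterion
    for i in range(criterion - 1, -1, -1):
        defs, uses = stmts[i]
        if relevant & set(defs):          # statement defines a relevant var
            slice_ix.add(i)
            relevant -= set(defs)         # those defs are now accounted for
            relevant |= set(uses)         # ...but their inputs become relevant
    return sorted(slice_ix)
```

For the program `x = 1; y = 2; z = x + 1; w = y + z`, slicing on the third statement keeps only the statements defining `x` and `z`; an executable slice is then exactly those statements emitted as a runnable program.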



DAVID HARVIE - Targeted Scrum: Software Development Inspired by Mission Command

PhD Dissertation Defense (CS)

When & Where:
February 12, 2015
2:30 pm
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Bo Luo
James Miller
Hossein Saiedian
Prajna Dhar*

Abstract: [ Show / Hide ]
Software engineering and mission command are two separate but similar fields, as both are instances of complex problem solving in environments with ever-changing requirements. Both fields have followed similar paths, from using industrial-age decomposition to deal with large problems to striving to be more agile and resilient. Our research hypothesis is that modifications to agile software development, inspired by mission command, can improve the software engineering process in terms of planning, prioritizing, and communicating software requirements and progress, as well as improving the overall software product. Targeted Scrum is a modification of Traditional Scrum based on three inspirations from Mission Command: End State, Line of Effort, and Targeting. These inspirations have led to the introduction of the Product Design Meeting and modifications of some current Scrum meetings and artifacts. We tested our research hypothesis using a semester-long undergraduate software engineering class. Student teams developed two software projects, one using Traditional Scrum and the other using Targeted Scrum. We then assessed how well both methodologies assisted the software development teams in planning and developing the software architecture, prioritizing requirements, and communicating progress. We also evaluated the software product produced by both methodologies. It was determined that Targeted Scrum did better in assisting the software development teams in the planning and prioritization of requirements. However, Targeted Scrum had a negligible effect on improving the software development teams' external and internal communications. Finally, Targeted Scrum did not have an impact on the product quality of the top-performing and worst-performing teams, but it did assist the product quality of teams in the middle of the performance spectrum.



BRAD TORRENCE - The Life Changing HERMIT: A Case Study of the Worker/Wrapper Transformation

MS Thesis Defense (CoE)

When & Where:
January 30, 2015
2:00 pm
2001B Eaton Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni

Abstract: [ Show / Hide ]
In software engineering, altering a program's original implementation disconnects it from the model that produced it. Reconnecting the model and new implementations must be done in a way that does not decrease confidence in the design's correctness and performance. This thesis demonstrates that it is possible, in practice, to connect the model of Conway’s Game of Life with new implementations, using the worker/wrapper transformation theory. This connection allows development to continue without the sacrifice of re-implementation.

HERMIT is a tool that allows programs implemented in Haskell to be transformed during the compilation process, and has features capable of performing worker/wrapper transformations. Specifically in these experiments, HERMIT is used to apply syntax transformations to replace Life's linked-list based implementation with one that uses other data structures in an effort to explore alternative implementations and improve overall performance.

Previous work has successfully performed the worker/wrapper conversion on an individual function using HERMIT. This thesis presents the first time that a programmer-directed worker/wrapper transformation has been attempted on an entire program. From this experiment, substantial observations have been made. These observations have led to proposed improvements to the HERMIT system, as well as a formal approach to the worker/wrapper transformation process in general.




RAMA KRISHNAMOORTHY - Adding Collision Detection to Functional Active Programming

MS Project Defense (CS)

When & Where:
January 28, 2015
10:00 am
2001B Eaton Hall
Committee Members:
Andy Gill, Chair
Luke Huan
Prasad Kulkarni

Abstract: [ Show / Hide ]
Active is a Haskell library for creating time-driven animations. The key concept is that every animation has its own starting and ending time, and the motion of each element can be defined as a function of time. This underlying idea is so intuitive and simple for users to understand that it has created a space for simple animations, called "Functional Active programming". Although many FRP libraries are available, they are often challenging to use for simple animations.
In this project, we have added reactive features to the Active library in an attempt to enhance the Active programming space without complicating its underlying principles. This lets Active elements detect collisions, or a mouse click event, and change their behavior accordingly. Built-in reactive features give Active programmers extra tools and significantly reduce the effort needed to code such reactions. These reactive features have been implemented on top of Blank Canvas.
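The core of time-driven collision detection, positions as functions of time, checked for overlap over the animation interval, can be sketched in a few lines. This is a language-neutral illustration in Python of the general idea, not the Haskell Active API; the names and the circular-element assumption are illustrative:

```python
def colliding(pos_a, pos_b, radius_a, radius_b, t):
    """Two circular elements collide at time t when the distance
    between their centers is at most the sum of their radii."""
    ax, ay = pos_a(t)
    bx, by = pos_b(t)
    return (ax - bx) ** 2 + (ay - by) ** 2 <= (radius_a + radius_b) ** 2

def first_collision(pos_a, pos_b, radius_a, radius_b, t0, t1, steps=1000):
    """Sample the animation interval [t0, t1] and return the first
    sampled time at which the elements overlap, or None."""
    for i in range(steps + 1):
        t = t0 + (t1 - t0) * i / steps
        if colliding(pos_a, pos_b, radius_a, radius_b, t):
            return t
    return None
```

For two elements moving toward each other along a line, the detected collision time triggers the behavior change, which is the reaction the extended Active library makes available to the animation author.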



MAHMOOD HAMEED - Nonlinear Mixing in Optical Multicarrier Systems

PhD Comprehensive Defense (EE)

When & Where:
January 14, 2015
9:00 am
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Shannon Blunt
Erik Perrins
Alessandro Salandrino
Tyrone Duncan*

Abstract: [ Show / Hide ]
Efficient use of the vast spectrum offered by fiber-optic links by end users with relatively small bandwidth requirements is possible by partitioning a high-speed signal in a wavelength channel into multiple low-rate subcarriers. Multi-carrier systems not only ensure optimized use of optical and electrical components, but also tolerate transmission impairments. The purpose of this research is to theoretically and experimentally study mixing among subcarriers in Radio-over-Fiber (RoF) and direct detection systems.
For an OFDM-RoF system, we present a novel technique that minimizes the RF-domain signal-signal beat interference, relaxes the phase noise requirement on the RF carrier, realizes the full potential of the optical heterodyne technique, and increases the performance-to-cost ratio of RoF systems. We demonstrate a RoF network that shares the same RF carrier for both downlink and uplink, avoiding the need for an additional RF oscillator in the customer unit.
For direct detection systems, we propose a theoretical and experimental investigation of the impact of semiconductor optical amplifier nonlinearities on Compatible-SSB signals. As preliminary work, we present an experimental comparison of the performance degradation of coherent optical OFDM and single-carrier Nyquist pulse modulated systems in a nonlinear environment. Furthermore, an analysis of the distribution properties of the optical phases driving a dual-drive MZM, and their dependence on the scaling factor, is proposed for the Compatible-SSB modulation format through simulations and experiments. An optimum scaling factor must be found that minimizes residual sideband and signal-signal beat interference in such systems.



JAY FULLER - Scalable, Synchronous, Multichannel DDS System for Radar Applications

MS Thesis Defense (EE)

When & Where:
January 12, 2015
1:00 pm
129 Nichols
Committee Members:
Carl Leuschen, Chair
Prasad Gogineni
Fernando Rodriguez-Morales
Zongbo Wang

Abstract: [ Show / Hide ]
The WFG2013 project uses Analog Devices AD9915 DDS ICs at up to 2.5 GS/s as basic building blocks for a scalable, synchronous, multichannel DDS system. Four DDS ICs are installed on a daughterboard with an Altera Cyclone 5E FPGA as a controller. The daughterboard can run standalone (Solo), in conjunction with another daughterboard (Duo), or as N daughterboards sharing a motherboard (Mucho).

Synchronization between configured DDS ICs is achieved via the on-chip SYNC_IN and SYNC_OUT signals. The master DDS (only one per configuration) generates the SYNC_OUT signal, which is distributed to the SYNC_IN pins on all DDS ICs, including the master. The synchronization signal distribution network was designed to minimize skew such that the SYNC_IN signal reaches all DDSs at virtually the same time. Even if some skew appears, the AD9915's SYNC_IN and SYNC_OUT signals have adjustable delay. The SYNC_IN signal causes the DDSs to assume a known state. Because all of the DDSs reach the same state at the same time, they are, by definition, synchronized.
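The synchronization guarantee, that identical state implies identical output, can be illustrated with an idealized phase-accumulator model of a DDS channel. This is a behavioral sketch only, not the WFG2013 firmware or the AD9915 register interface; names and parameters are illustrative:

```python
import math

def dds_output(freq_word, acc_bits, n_samples, phase0=0):
    """Idealized DDS channel: a phase accumulator advances by
    freq_word every clock and drives a sine converter.
    Output frequency = freq_word / 2**acc_bits * f_clock."""
    mask = (1 << acc_bits) - 1
    acc = phase0 & mask                   # state loaded by the SYNC pulse
    samples = []
    for _ in range(n_samples):
        samples.append(math.sin(2 * math.pi * acc / (1 << acc_bits)))
        acc = (acc + freq_word) & mask    # wrap like a hardware accumulator
    return samples
```

Two channels reset to the same phase by a shared SYNC pulse and clocked together produce sample-for-sample identical outputs, which is exactly what the SYNC_IN distribution network enforces in hardware.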