
Electrical Engineering and Computer Science

Defense Notices

EECS MS and PhD Defense Notices for

All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.




Upcoming Defense Notices


ANSU JOYS - Identifying Software Phase Markers in Java Byte Code

MS Project Defense (CS)

When & Where:
September 9, 2014
2:00 pm
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Andy Gill
Bo Luo

Abstract:
Program execution can be classified into phases. These phases can be repeated during a single execution of the application. The ability to identify and classify the phases statically will help prepare the system early for the next phase, which can benefit overall program performance at run time. While static program phase detection algorithms have been explored for binary executables with promising results, to the best of our knowledge, such algorithms have not been targeted and evaluated for managed-language programs, specifically Java programs. Accurate detection of future program phases can allow the Java virtual machine to perform phase-specific optimizations to improve performance.

In this project, we build a framework to detect program phases and insert software phase markers in Java byte code. We employ an existing algorithm to detect program phases and adapt it to detect phases for Java binaries. We modify the control flow graph generated by the byte code analysis tool WALA (Watson Libraries for Analysis) to integrate program loops. We analyze each method in the control flow graph produced by WALA to detect loops, and convert the call graph into a "call loop graph". We then rely on program profiling to provide data on the number of times each basic block and edge is reached at run time. We use this profiling information to determine the average number of instructions executed along each graph edge, the average number of times an edge is executed, and the standard deviation of instruction counts on these edges; our algorithm uses these statistics to identify the software phase markers.
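
To make the selection criterion concrete, below is a minimal sketch of computing the edge statistics described above and flagging candidate phase-marker edges. The edge-profile data structure, thresholds, and selection rule are illustrative assumptions, not the project's actual implementation.

    from statistics import mean, stdev

    def find_phase_marker_edges(edge_profiles, min_weight=10000, max_rel_std=0.2):
        """edge_profiles: {edge_id: list of instruction counts observed on each
        traversal of the edge at run time}. Returns edges whose per-traversal
        instruction count is both large and stable, i.e. candidate phase markers."""
        markers = []
        for edge, counts in edge_profiles.items():
            if len(counts) < 2:
                continue
            avg_instr = mean(counts)      # average instructions executed along the edge
            executions = len(counts)      # how many times the edge was reached
            rel_std = stdev(counts) / avg_instr if avg_instr else float("inf")
            # keep edges that account for much of the execution and behave consistently
            if avg_instr * executions >= min_weight and rel_std <= max_rel_std:
                markers.append(edge)
        return markers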




LOGAN SMITH - Validation of CReSIS Synthetic Aperture Radar Processor and Optimal Processing Parameters

MS Thesis Defense (EE)

When & Where:
September 9, 2014
10:30 am
317 Nichols Hall
Committee Members:
Carl Leuschen, Chair
Prasad Gogineni
John Paden

Abstract:
Sounding the ice sheets of Greenland and Antarctica is a vital component in determining the effect of global warming on sea level rise. Of particular importance to measure are the outlet glaciers that transport ice from the interior to the edge of the ice sheet. These outlet glaciers are difficult to sound due to crevassing caused by the relatively fast movement of the ice in the glacial channel and higher signal attenuation caused by warmer ice. The Center for Remote Sensing of Ice Sheets (CReSIS) uses multi-channel airborne radars with methods for achieving better resolution and signal-to-noise ratio (SNR) in the three major dimensions to sound outlet glaciers. Synthetic aperture radar (SAR) techniques are used in the along-track dimension, pulse compression in the range dimension, and an antenna array in the cross-track dimension.

CReSIS has developed a SAR processor to effectively and efficiently process the data collected by these radars in each dimension. To validate the performance of this processor, a SAR simulator was developed with the functionality to test multiple aspects of the SAR processor. In addition to the implementation of this simulator for validation of processing the data in the along-track, cross-track, and range dimensions, there are a number of data-dependent processing steps that can affect the quality of the final data product. These include creating matched filters for each dimension of the data, removing phase and amplitude differences between receive channels, and determining the optimal along-track beamwidth to use for processing the data. All of these factors can improve the ability to obtain the maximum amount of information from the collected data. The validation, the optimal processing parameters, and the theory behind them are discussed here.
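
For readers unfamiliar with pulse compression, the following is a toy sketch of the matched-filter step mentioned above, applied to a simulated linear-FM echo. All parameter values are arbitrary illustrations and are unrelated to the CReSIS radars or the simulator validated in this thesis.

    import numpy as np

    fs, T, B = 200e6, 10e-6, 50e6                 # sample rate, pulse length, bandwidth (arbitrary)
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear-FM reference pulse

    # simulated echo: the reference delayed by 2 microseconds plus noise
    rx = np.zeros(4096, dtype=complex)
    delay = int(2e-6 * fs)
    rx[delay:delay + len(chirp)] = chirp
    rx += 0.1 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

    # pulse compression is correlation against the reference waveform
    compressed = np.correlate(rx, chirp, mode="valid")
    peak = int(np.argmax(np.abs(compressed)))     # fast-time sample of the echo
    range_m = 3e8 * (peak / fs) / 2               # two-way delay converted to range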



ADITYA BALASUBRAMANIAN - Study and Performance Analysis of OFDM using GNURadio and USRP

MS Project Defense (CoE)

When & Where:
September 9, 2014
9:00 am
250 Nichols Hall
Committee Members:
Lingjia Liu, Chair
Joe Evans
James Sterbenz

Abstract:
Software-defined radios (SDRs) are a rapidly evolving technology used widely in industry and academia today. They offer a very low-cost and flexible alternative for implementing and testing wireless technologies, since most of the physical-layer functionality is implemented in software instead of hardware. The Universal Software Radio Peripheral (USRP) is one of the most popular products belonging to the SDR family. GNURadio, a software development kit comprising C++ and Python libraries, is widely used with the USRP as a hardware platform to create SDR applications.
In this project, a testbed is implemented for performance analysis of an OFDM communication system using GNURadio and USRP. The performance is analyzed and studied in a practical laboratory environment. The packet error rate versus SNR is calculated in different environmental settings. The effect of interference and obstruction is also taken into account in studying the performance.



H. SHANKER RAO - Dominant Attribute and Multiple Scanning Approaches for Discretization of Numerical Attributes

MS Thesis Defense (CS)

When & Where:
September 8, 2014
2:30 pm
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Perry Alexander
Doina Caragea

Abstract:
Rapid development of high-throughput technologies and database management systems has made it possible to produce and store large amounts of data. However, making sense of big data and discovering knowledge from it is a compounding challenge. Generally, data mining techniques search for information in datasets and express the gained knowledge in the form of trends, regularities, patterns, or rules. Rules are frequently identified automatically by a technique called rule induction, which is the most important technique in data mining and machine learning and was developed primarily to handle symbolic data. However, real-life data often contain numerical attributes; therefore, in order to fully utilize the power of rule induction techniques, an essential preprocessing step of converting numeric data into symbolic data, called discretization, is employed in data mining.
Here we present two entropy-based discretization techniques, known as the dominant attribute approach and the multiple scanning approach, respectively. These approaches were implemented as two explicit algorithms in the Java programming language, and experiments were conducted by applying each algorithm separately to seventeen well-known numerical data sets. The resulting discretized data sets were used for rule induction by the LEM2 (Learning from Examples Module 2) algorithm. For each dataset, experiments with the multiple scanning approach were repeated with incremental scans until the interval counts stabilized. Preliminary results from this study indicate that the multiple scanning approach performed better than the dominant attribute approach in terms of producing comparatively smaller and simpler rule sets.
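
To illustrate the core operation shared by both entropy-based approaches (choosing the cut point on a numeric attribute that minimizes the conditional class entropy), here is a small sketch. The data layout and the single-cut-point scope are simplifications for illustration, not the implemented algorithms.

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def best_cut_point(values, labels):
        """Return the cut point on one numeric attribute that minimizes the
        weighted class entropy of the two resulting intervals."""
        pairs = sorted(zip(values, labels))
        best_cut, best_h = None, float("inf")
        for i in range(1, len(pairs)):
            if pairs[i - 1][0] == pairs[i][0]:
                continue                      # candidate cuts lie between distinct values
            cut = (pairs[i - 1][0] + pairs[i][0]) / 2
            left = [l for v, l in pairs if v < cut]
            right = [l for v, l in pairs if v >= cut]
            h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
            if h < best_h:
                best_cut, best_h = cut, h
        return best_cut, best_h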





Past Defense Notices


YI ZHU - Matrix and Tensor-based ESPRIT Algorithm for Joint Angle and Delay Estimation in 2D Active Massive MIMO Systems and Analysis of Direction of Arrival Estimation Algorithms for Basal Ice Sheet Tomography

MS Thesis Defense (EE)

When & Where:
August 6, 2014
9:00 am
246 Nichols Hall
Committee Members:
Lingjia Liu, Chair
Shannon Blunt
John Paden
Erik Perrins

Abstract:
In this thesis, we apply and analyze three direction of arrival (DoA) algorithms to tackle two distinct problems: one belongs to wireless communication, the other to radar signal processing. Though the essence of both problems is DoA estimation, their formulations, underlying assumptions, and application scenarios are quite different. Hence, we treat them separately, with the ESPRIT algorithm the focus of Part I and MUSIC and MLE detailed in Part II.

For the wireless communication scenario, mobile data traffic is expected to grow exponentially in the future. In “massive MIMO” systems, a base station will rely on uplink sounding signals from mobile stations to obtain the spatial information needed to perform MIMO beamforming. Accordingly, multi-dimensional parameter estimation of a ray-based multipath wireless channel becomes crucial for such systems to realize the predicted capacity gains. We study joint angle and delay estimation for such systems, and the results suggest that the dimension of the antenna array at the base station plays an important role in determining the estimation performance. These insights will be useful for designing practical “massive MIMO” systems in future mobile wireless communications.

For the problem of radar sensing of ice sheet topography, one of the key requirements for deriving more realistic ice sheet models is to obtain a good set of basal measurements that enables accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of DoA estimation. The SAR-focused datasets provide a good case study. For the antenna array geometry and sample support used in our tomographic application, MUSIC initially performs better in a cross-over analysis where the estimated topography from crossing flight lines is compared for consistency. However, after several improvements are applied to MLE, MLE outperforms MUSIC. We observe that spatial bottom smoothing, which aims to remove artifacts produced by the MLE algorithm, is the most essential step in the post-processing procedure. The 3D tomography we obtained lays a good foundation for further analysis and modeling of ice sheets.
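
For readers unfamiliar with MUSIC, the following is a minimal sketch of the pseudospectrum computation for a uniform linear array. The array geometry, snapshot model, and parameter choices are generic illustrations and differ from the cross-track array and tomographic processing used in the thesis.

    import numpy as np

    def music_spectrum(snapshots, n_sources, d_over_lambda=0.5,
                       angles=np.linspace(-90, 90, 361)):
        """snapshots: n_sensors x n_snapshots complex array of array samples.
        Returns the scan angles and the MUSIC pseudospectrum (peaks mark DoAs)."""
        n_sensors = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
        _, eigvecs = np.linalg.eigh(R)                            # eigenvectors, eigenvalues ascending
        En = eigvecs[:, : n_sensors - n_sources]                  # noise subspace
        spectrum = []
        for theta in np.deg2rad(angles):
            a = np.exp(-2j * np.pi * d_over_lambda * np.arange(n_sensors) * np.sin(theta))
            spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return angles, np.array(spectrum)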



YUHAO YANG - Protecting Attributes and Contents in Online Social Networks

PhD Dissertation Defense (CS)

When & Where:
August 1, 2014
1:00 pm
250 Nichols Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Ho*

Abstract:
With the fast development of computer and information technologies, online social networks have grown dramatically. While huge amounts of information are distributed expeditiously on online social networking sites, privacy concerns arise.
In this dissertation, we first study the vulnerabilities of user attributes and contents, in particular, the identifiability of the users when the adversary learns a small piece of information about the target. We further employ an information theory based approach to quantitatively evaluate the threats of attribute-based re-identification. We have shown that large portions of users with online presence are highly identifiable.
The notion of privacy as control and information boundary has been introduced by the user-oriented privacy research community, and partly adopted in commercial social networking platforms. However, such functions are not widely accepted by the users, mainly because it is tedious and labor-intensive to manually assign friends into such circles. To tackle this problem, we introduce a social circle discovery approach using multi-view clustering. We present our observations on the key features of social circles, including friendship links, content similarity and social interactions. We treat each feature as one view, and propose a one-side co-trained spectral clustering technique, which is tailored for the sparse nature of our data. We evaluate our approach on real-world online social network data, and show that the proposed approach significantly outperforms structure-based clustering. Finally, we build a proof-of-concept demonstration of the automatic circle detection and recommendation approaches.



JAMUNA GOPAL - I Know Your Family: A Hybrid Information Retrieval Approach to Extract Family Information

MS Thesis Defense (CS)

When & Where:
July 22, 2014
8:00 am
250 Nichols Hall
Committee Members:
Bo Luo, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni

Abstract:
The aim of this project is to identify family-related information about a person from their Twitter data. We use their personal details, tweets, and their friends’ details in order to achieve this. Since we deal with modern short-text data, we use a hybrid information retrieval methodology that takes into account the parts of speech of the data, phrase similarity, and semantic similarity, along with the openly available Twitter data. A future use of this research is to develop a client-side protection tool that helps users check the data they are about to post for privacy breaches.



KAIGE YAN - Power and Performance Co-optimization for Emerging Mobile Platforms

MS Project Defense (CS)

When & Where:
July 21, 2014
4:00 pm
250 Nichols Hall
Committee Members:
Xin Fu, Chair
Prasad Kulkarni
Heechul Yun

Abstract:
Mobile devices have emerged as the most popular computing platform since 2011. Unlike traditional PCs, mobile devices are more power-constrained and performance-sensitive due to their size. In order to reduce power consumption and improve performance, we focus on the last-level cache (LLC), which is a power-hungry structure that is critical to performance on mobile platforms. In this project, we first integrate the McPAT power model into the Gem5 simulator. We also introduce emerging memory technologies, such as Spin-Transfer Torque RAM (STT-RAM) and embedded DRAM (eDRAM), into the cache design and compare their power and performance effectiveness with the conventional SRAM-based cache. Additionally, we identify that frequent execution switching between kernel and user code is the major reason for the high LLC miss rate in mobile applications, because blocks belonging to kernel and user space interfere severely with each other. We further propose static and dynamic way-partition schemes to separate the cache blocks of kernel and user space. The experimental results show promising power reduction and performance improvement with our proposed techniques.



MICHAEL JANTZ - Exploring Dynamic Compilation and Cross-Layer Object Management Policies for Managed Language Applications

PhD Dissertation Defense (CS)

When & Where:
July 21, 2014
1:30 pm
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden*

Abstract:
Recent years have witnessed the widespread adoption of managed programming languages that are designed to execute on virtual machines. Virtual machine architectures provide several powerful software engineering advantages over statically compiled binaries, such as portable program representations, additional safety guarantees, and automatic memory and thread management, which have largely driven their success. To support and facilitate the use of these features, virtual machines implement a number of services that adaptively manage and optimize application behavior during execution. Such runtime services often require tradeoffs between efficiency and effectiveness, and different policies can have major implications for the system's performance and energy requirements.

In this work, we extensively explore policies for the two runtime services that are most important for achieving performance and energy efficiency: dynamic (or Just-In-Time (JIT)) compilation and memory management. First, we examine the properties of single-tier and multi-tier JIT compilation policies in order to find strategies that realize the best program performance for existing and future machines. We perform hundreds of experiments with different compiler aggressiveness and optimization levels to evaluate the performance impact of varying if and when methods are compiled. Next, we investigate the issue of how to optimize program regions to maximize performance in JIT compilation environments. For this study, we conduct a thorough analysis of the behavior of optimization phases in our dynamic compiler, and construct a custom experimental framework to determine the performance limits of phase selection during dynamic compilation. Lastly, we explore innovative memory management strategies to improve energy efficiency in the memory subsystem. We propose and develop a novel cross-layer approach to memory management that integrates information and analysis in the VM with fine-grained management of memory resources in the operating system. Using custom as well as standard benchmark workloads, we perform detailed evaluation that demonstrates the energy-saving potential of our approach.




JINGWEIJIA TAN - Modeling and Improving the GPGPU Reliability in the Presence of Soft Errors

MS Project Defense (CS)

When & Where:
July 21, 2014
9:00 am
250 Nichols Hall
Committee Members:
Xin Fu, Chair
Prasad Kulkarni
Heechul Yun

Abstract:
GPGPUs (general-purpose computing on graphics processing units) have emerged as a highly attractive platform for HPC (high performance computing) applications due to their strong computing power. Unlike graphics processing applications, HPC applications have rigorous requirements on execution correctness, which is generally ignored in traditional GPU design. Soft errors, which are failures caused by high-energy neutron or alpha particle strikes in integrated circuits, have become a major reliability concern due to shrinking feature sizes and growing integration density. In this project, we first build a framework, GPGPU-SODA, to model the soft-error vulnerability of the GPGPU microarchitecture using a publicly available simulator. Based on the framework, we identify the streaming processors as the reliability hot-spot in GPGPUs. We further observe that the streaming processors are not fully utilized during branch divergence and the pipeline stalls caused by long-latency operations. We then propose a technique, RISE, to recycle the streaming processors' idle time for soft-error detection in GPGPUs. Experimental results show that RISE obtains good fault coverage with negligible performance degradation.



KARTHIK PODUVAL - HGS Schedulers for Digital Audio Workstation like Applications

MS Thesis Defense (CoE)

When & Where:
July 14, 2014
10:00 am
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Victor Frost
Jim Miller

Abstract:
Digital Audio Workstation (DAW) applications are real-time applications that have special timing constraints. HGS is a real-time scheduling framework that allows developers to implement custom schedulers based on any scheduling algorithm through a process of direct interaction between client threads and their schedulers. Such scheduling could extend well beyond the common priority model that currently exists and could represent arbitrary application semantics that can be well understood and acted upon by the associated scheduler. We term this "need-based scheduling". In this thesis we first study some DAW implementations and then create a few different HGS schedulers aimed at helping DAW applications meet their needs.




NEIZA TORRICO PANDO - High Precision Ultrasound Range Measurement System

MS Thesis Defense (EE)

When & Where:
June 25, 2014
11:00 am
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Swapan Chakrabarti
Ron Hui

Abstract:
Real-time, precise range measurement between objects is useful for a variety of applications. The slow propagation of acoustic signals in air (330 m/s) makes the use of ultrasound frequencies an ideal approach for measuring an accurate time of flight. The time of flight can then be used to calculate the range between two objects. The objective of this project is to achieve a precise range measurement, within 10 cm uncertainty and with an update interval of 30 ms, for distances up to 10 m between unmanned aerial vehicles (UAVs) flying in formation. Both transmitter and receiver are synchronized with a 1 pulse-per-second signal from a GPS receiver. The time of flight is calculated using the cross-correlation of the transmitted and received waves. To allow for multiple users, a 40 kHz signal is phase modulated with Gold or Kasami codes.
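
A minimal sketch of the time-of-flight estimation described above: cross-correlate the received waveform with the transmitted reference and convert the peak lag into a range. The sampling rate and function names are illustrative assumptions; the speed of sound is taken as the 330 m/s figure quoted in the abstract.

    import numpy as np

    def estimate_range(tx, rx, fs, c=330.0):
        """Estimate one-way range (m) from GPS-synchronized transmit and receive
        recordings. tx, rx: sampled waveforms; fs: sample rate in Hz."""
        corr = np.correlate(rx, tx, mode="full")         # full cross-correlation
        lag = np.argmax(np.abs(corr)) - (len(tx) - 1)    # delay in samples
        tof = lag / fs                                   # time of flight in seconds
        return c * tof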



CAMERON LEWIS - 3D Imaging of Ice Sheets

PhD Comprehensive Defense (EE)

When & Where:
June 10, 2014
3:00 pm
317 Nichols Hall
Committee Members:
Prasad Gogineni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale*

Abstract:
Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves affects both the mass balance of the ice sheet and the global climate system. This melting and refreezing influences the development of Antarctic Bottom Water, which helps drive the oceanic thermohaline circulation, a critical component of the global climate system. Basal melt rates can be estimated through traditional glaciological techniques, relying on conservation of mass. However, this requires accurate knowledge of the ice movement, surface accumulation and ablation, and firn compression. Boreholes can provide direct measurement of melt rates, but only provide point estimates and are difficult and expensive to perform. Satellite altimetry measurements have been heavily relied upon for the past few decades. Thickness and melt rate estimates require the same conservation-of-mass a priori knowledge, with the additional assumption that the ice shelf is in hydrostatic equilibrium. Even with newly available, ground-truthed density and geoid estimates, satellite-derived ice shelf thickness and melt rate estimates suffer from relatively coarse spatial resolution and interpolation-induced error. Non-destructive radio echo sounding (RES) measurements from long-range airborne platforms provide the best solution for fine spatial and temporal resolution over long survey traverses and only require a priori knowledge of firn density and surface accumulation. Previously, basal melt rate experiments derived from RES data have been limited to ground-based experiments with poor coverage and spatial resolution. To improve upon this, an airborne multi-channel wideband radar has been developed for the purpose of imaging shallow ice and ice shelves. A moving platform and cross-track antenna array will allow for fine-resolution 3-D imaging of basal topography. An initial experiment will use a ground-based system to image shallow ice and generate 3-D imagery as a proof of concept. This will then be applied to ice shelf data collected by an airborne system.



TRUC ANH NGUYEN - Transfer Control for Resilient End-to-End Transport

MS Thesis Defense (CS)

When & Where:
June 5, 2014
9:00 am
246 Nichols Hall
Committee Members:
James Sterbenz
Victor Frost
Gary Minden

Abstract:
Residing between the network layer and the application layer, the transport layer exchanges application data using the services provided by the network. Given the unreliable nature of the underlying network, reliable data transfer has become one of the key requirements for transport-layer protocols such as TCP. Studying the various mechanisms developed for TCP to increase the correctness of data transmission while fully utilizing the network's bandwidth provides a strong background for our study and development of our own resilient end-to-end transport protocol. Given this motivation, in this thesis we study the different TCP error control and congestion control techniques by simulating them under different network scenarios using ns-3. For error control, we narrow our research to acknowledgement methods such as cumulative ACK (traditional TCP's way of ACKing), SACK, NAK, and SNACK. The congestion control analysis covers TCP variants including Tahoe, Reno, NewReno, Vegas, Westwood, Westwood+, and TCP SACK.



CENK SAHIN - On Fundamental Performance Limits of Delay-Sensitive Wireless Communications

PhD Comprehensive Defense (EE)

When & Where:
June 2, 2014
10:00 am
246 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Zsolt Talata*

Abstract:
Mobile traffic is expected to grow at an annual compound rate of 66% in the next 3 years, while among the data types that account for this growth mobile video has the highest growth rate. Since most video applications are delay-sensitive, delay-sensitive traffic will be the dominant traffic over future wireless communications. Consequently, future mobile wireless systems will face the dual challenge of supporting large traffic volume while providing reliable service for various kinds of delay-sensitive applications (e.g. real-time video, online gaming, and voice-over-IP (VoIP)). Past work on delay-sensitive communications has generally overlooked physical-layer considerations such as the modulation and coding scheme (MCS), probability of decoding error, and coding delay by employing oversimplified models for the physical layer. With the proposed research we aim to bridge information theory, communication theory, and queueing theory by jointly considering the delay-violation probability and the probability of decoding error to identify the fundamental trade-offs among wireless system parameters such as channel fading speed, average received signal-to-noise ratio (SNR), MCS, and user-perceived quality of service. We will model the underlying wireless channel by a finite-state Markov chain, use channel dispersion to track the probability of decoding error and the coding delay for a given MCS, and focus on the asymptotic decay rate of buffer occupancy for queueing delay analysis. The proposed work will be used to obtain fundamental bounds on the performance of queued systems over wireless communication channels.



GHAITH SHABSIGH - LPI Performance of an Ad-Hoc Covert System Exploiting Wideband Wireless Mobile Networks

PhD Comprehensive Defense (EE)

When & Where:
May 27, 2014
9:00 am
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Chris Allen
Lingjia Liu
Erik Perrins
Tyrone Duncan*

Abstract:
The high level of functionality and flexibility of modern wideband wireless networks, LTE and WiMAX, has made them the preferred technology for providing mobile internet connectivity. The high performance of these systems comes from adopting several innovative techniques such as Orthogonal Frequency Division Multiplexing (OFDM), Adaptive Modulation and Coding (AMC), and Hybrid Automatic Repeat Request (HARQ). However, this flexibility also opens the door for network exploitation by other ad-hoc networks, such as Device-to-Device technology or covert systems. In this work effort, we provide the theoretical foundation for a new ad-hoc wireless covert system that hides its transmission in the RF spectrum of an OFDM-based wideband network (the target network), like LTE. The first part of this effort will focus on designing the covert waveform to achieve a low probability of detection (LPD). Next, we compare the performance of several available detection methods in detecting the covert transmission, and propose a detection algorithm that would represent a worst-case scenario for the covert system. Finally, we optimize the performance of the covert system in terms of its throughput, transmission power, and interference on/from the target network.



MOHAMMED ALENAZI - Network Resilience Improvement and Evaluation Using Link Additions

PhD Comprehensive Defense (CS)

When & Where:
May 23, 2014
1:00 pm
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan*

Abstract:
Computer networks are prone to targeted attacks and natural disasters that could disrupt their normal operation and services. Adding links to form a full mesh yields the most resilient network, but it incurs an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding the optimal set of links to add via exhaustive search is impracticable given the size of communication network graphs. Using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity by increasing a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the resulting resilience. The flow robustness results of the attacks show that the improved networks are more resilient than the non-improved networks.
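
A minimal sketch of the greedy link-addition step described above, using algebraic connectivity as the robustness metric. The use of networkx, the fixed link budget k, and the brute-force candidate scan are illustrative choices, not the dissertation's implementation.

    import itertools
    import networkx as nx   # algebraic_connectivity also requires scipy

    def greedy_add_links(G, k):
        """Greedily add k links, each time picking the non-edge whose addition
        yields the largest algebraic connectivity of the augmented graph."""
        G = G.copy()

        def gain(edge):
            G.add_edge(*edge)
            value = nx.algebraic_connectivity(G)   # second-smallest Laplacian eigenvalue
            G.remove_edge(*edge)
            return value

        for _ in range(k):
            candidates = [e for e in itertools.combinations(G.nodes, 2)
                          if not G.has_edge(*e)]
            if not candidates:
                break
            G.add_edge(*max(candidates, key=gain))
        return G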



ASHWINI SHIKARIPUR NADIG - Statistical Approaches to Inferring Object Shape from Single Images

PhD Dissertation Defense (EE)

When & Where:
May 20, 2014
4:00 pm
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Brian Potetz, Co-chair
Luke Huan
Jim Miller
Paul Selden*

Abstract:
Depth inference is a fundamental problem of computer vision with a broad range of potential applications. Monocular depth inference techniques, particularly shape from shading, date back to as early as the 1940s, when shape from shading was first used to study the shape of the lunar surface. Since then there has been ample research to develop depth inference algorithms using monocular cues. Most of these are based on physical models of image formation and rely on a number of simplifying assumptions that do not hold for real-world and natural imagery. Very few make use of the rich statistical information contained in real-world images and their 3D information. There have been a few notable exceptions though. The study of statistics of natural scenes has been concentrated on outdoor natural scenes, which are cluttered. Statistics of scenes of single objects have been less studied, but are an essential part of daily human interaction with the environment. This thesis focuses on studying the statistical properties of single objects and their 3D imagery, uncovering some interesting trends, which can benefit shape inference techniques. I acquired two databases: Single Object Range and HDR (SORH) and the Eton Myers Database of single objects, including laser-acquired depth, binocular stereo, photometric stereo, and High Dynamic Range (HDR) photography. The fractal structure of natural images was previously well known, and thought to be a universal property. However, my research showed that the fractal structure of single objects and surfaces is governed by a wholly different set of rules. Classical computer vision problems of binocular and multi-view stereo, photometric stereo, shape from shading, structure from motion, and others, all rely on accurate and complete models of which 3D shapes and textures are plausible in nature, to avoid producing unlikely outputs. Bayesian approaches are common for these problems, and hopefully the findings on the statistics of the shape of single objects from this work and others will both inform new and more accurate Bayesian priors on shape, and also enable more efficient probabilistic inference procedures.



STEVE PENNINGTON - Spectrum Coverage Estimation Using Large Scale Measurements

PhD Dissertation Defense (EE)

When & Where:
May 19, 2014
11:30 am
246 Nichols Hall
Committee Members:
Joseph Evans, Chair
Arvin Agah
Victor Frost
Gary Minden
Ronald Aust*

Abstract:
The work presented in this thesis explores the use of geographic data and geostatistical methods to estimate path loss for cognitive radio networks. Path loss models typically employed in this scenario use a general terrain type (i.e., urban, suburban, or rural) and possibly a digital elevation model to predict excess path loss over the free space model. Additional descriptive knowledge of the local environment can be used to make more accurate path loss predictions. This research focuses on the use of visible imagery, digital elevation models, and terrain classification systems for predicting localized propagation characteristics. A low-cost data collection platform was created and used to generate a sufficiently large spectrum measurement set for machine learning. A series of path loss models were fitted to the data using linear and nonlinear methods. These models were then used to create a radio environment map depicting estimated signal strength. All of the models created have good cross-validated prediction results when compared to existing path loss models, although some of the more flexible models had a tendency to overfit the data. A number of geostatistical models were fitted on the data as well.
These models have the advantage of not requiring the transmitter location in order to create a model. The geostatistical models performed very well when given a sufficient density of observations but were not able to generalize as well as some of the regression models. An analysis of the geographical data sets indicated that each had a significant measurable effect on path loss estimation, with the medium-resolution imagery and elevation data providing the greatest increase in accuracy. Finally, these models were compared to a number of existing path loss models, demonstrating a gain in usable spectrum for cognitive radio network use.
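
As a simple point of reference for the regression models discussed above, here is a sketch of fitting a log-distance path loss model to measurements by least squares. The measurement arrays and reference distance are hypothetical placeholders, not data or models from this dissertation.

    import numpy as np

    # hypothetical measurements: link distances (m) and measured path loss (dB)
    d = np.array([50, 120, 300, 700, 1500, 4000], dtype=float)
    pl = np.array([78, 88, 97, 108, 116, 129], dtype=float)

    # log-distance model: PL(d) = PL0 + 10 * n * log10(d / d0)
    d0 = 50.0
    X = np.column_stack([np.ones_like(d), 10 * np.log10(d / d0)])
    (pl0, n), *_ = np.linalg.lstsq(X, pl, rcond=None)
    print(f"PL0 = {pl0:.1f} dB at d0, path loss exponent n = {n:.2f}")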



BENJAMIN EWY - Collaborative Approaches to Probabilistic Reasoning in Network Management

PhD Dissertation Defense (EE)

When & Where:
May 19, 2014
9:30 am
246 Nichols Hall
Committee Members:
Joseph Evans, Chair
Arvin Agah
Victor Frost
Gary Minden
Bozenna Pasik-Duncan

Abstract:
Tactical networks, networks designed to facilitate command and control capabilities for militaries, have key attributes that differ from the commercial Internet. Characterizing, modeling, and exploiting our understanding of these differences is the focus of this research.
The differences between tactical and commercial networks can be found primarily in the areas of access bandwidth, access diversity, access latency, core latency, subnet distribution, and network infrastructure. In this work we characterize and model these differences. These key attributes affect research into issues such as peer-to-peer protocols, service discovery, and server selection among others, as well as the deployment of services and systems in tactical networks. Researchers traditionally struggle with measuring, analyzing, or testing new ideas on tactical networks due to a lack of direct access, and thus this characterization is crucial to evolving this research field.
In this work we develop a topology generator that creates realistic tactical networks that can be visualized, analyzed, and emulated.
Topological features including geographically constrained line of sight networks, high density low bandwidth satellite networks, and the latest high bandwidth on-the-move networks are captured. All of these topological features can be mixed to create realistic networks for many different tactical scenarios. A web based visualization tool is developed, as well as the ability to export topologies to the Mininet network virtualization environment.
Finally, state-of-the-art server selection algorithms are reviewed and found to perform poorly for tactical networks. We develop a collaborative algorithm tailored to the attributes of tactical networks, and utilize our generated networks to assess the algorithm, finding a reduction in utilized bandwidth and a significant reduction in client to server latency as key improvements.




MEENAKSHI MISHRA - Task Relationship Modeling in Multitask Learning with Applications to Computational Toxicity

PhD Comprehensive Defense (EE)

When & Where:
May 6, 2014
1:00 pm
246 Nichols Hall
Committee Members:
Luke Huan, chair
Arvin Agah
Swapan Chakrabarti
Ron Hui
Zhou Wang*

Abstract:
Multitask learning is a learning framework which explores the concept of sharing training information among multiple related tasks to improve the generalization error of each task. The benefits of multitask learning have been shown both empirically and theoretically. A number of fields benefit from multitask learning, including toxicology. However, current multitask learning algorithms make the key assumption that all the tasks are related to each other in a similar fashion. Users often do not know which tasks are related and train all tasks together. This results in sharing of training information even among unrelated tasks. Training unrelated tasks together can cause negative transfer and deteriorate the performance of multitask learning. For example, consider the case of predicting in vivo toxicity of chemicals at various endpoints from the chemical structure. Toxicities at all the endpoints are not related. Since biological networks are highly complex, it is also not possible to predetermine which endpoints are related. Thus, training all the endpoints together may have a negative effect on the overall performance. This proposal aims at developing algorithms which make use of task relationship models to further improve the generalization error and prevent transfer of information among unrelated tasks. The algorithms proposed here either learn the task relationships or utilize known task relationships in the learning framework. Further, these algorithms will be used to predict toxicity of chemicals at various endpoints using the chemical structures and the results of multiple in vitro assays performed on these chemicals.



YINGYING MA - A Comparison of Two Discretization Options of the MLEM2 Algorithm

MS Project Defense (CS)

When & Where:
May 5, 2014
1:00 pm
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse
Luke Huan
Prasad Kulkarni

Abstract:
A rule set is a popular symbolic representation of knowledge derived from data. Rule induction is an important technique of data mining and machine learning. Many rule induction algorithms are widely used, such as LEM1, LEM2, and MLEM2. Some of these algorithms perform better on special data, e.g., on inconsistent data sets or data sets with missing attribute values. This work discusses the basic ideas of the MLEM2 algorithm, especially how it handles data sets with numeric attribute values. Additionally, a comparison of the performance of different discretization options of the MLEM2 algorithm is included.



FRANK MOLEY - Maintaining Privacy and Security of Personally Identifiable Information Data in a Connected System

MS Project Defense (IT)

When & Where:
April 22, 2014
4:00 pm
280 Best-Edwards Campus
Committee Members:
Hossein Saiedian, Chair
Fengjun Li
Bo Luo

Abstract:
The large data stores of Personally Identifiable Information (PII) in today's connected systems, coupled with the increased potential damage of identity theft, bring the need for architectures that provide secure collection, storage, and transmission of this data. This need has not yet been addressed by an industry standard in the way the Payment Card Industry (PCI) has standardized payment data security. At the same time, however, municipalities, states, and even countries are instituting legislation that requires business entities that store PII data to maintain adequate security of the data. The need has become clear for a set of processes, procedures, and systems that provide a framework for securely storing PII data. This project defines the lower-level datastore system and associated services for that PII data. It also outlines a network architecture prototype that provides segmented security zones used to add more layers of security in a connected system.



KALYANI HARIDASYAM - AskMyNetwork: Finding Reliable Feedback and Reviews

MS Project Defense (IT)

When & Where:
April 22, 2014
1:00 pm
280 Best Hall-Edwards
Committee Members:
Hossein Saiedian, Chair
Fengjun Li
Bo Luo

Abstract:
We all consult online reviews before obtaining a product or service. However, not all reviews can be trusted. For example, in 2013, "Operation Clean Turf", a yearlong sting operation in New York State, caught 19 different companies that were writing fake reviews in online forums like Yelp for businesses that paid them. For my project, I developed an application called AskMyNetwork. AskMyNetwork interfaces with Facebook to obtain feedback or input from a user's Facebook friends. The rationale for my project is that the feedback or input comes from "friends" (personal friends, family members, or colleagues in a user's Facebook friends list) and can be trusted.

AskMyNetwork has four major components: Login, Search My Network, Ask My Network, and Notifications. Using the Login component, the user can log in to the application with Facebook credentials. Using the Search My Network component, the user can define search criteria (e.g., search for a restaurant in Kansas City) and search his or her Facebook data for relevant results. Using the Ask My Network component, the user can ask a group of friends a question about a product or service on which they would like an opinion. The group of friends can be chosen either by name or by the friends' current location. Using the Notifications component, the user can view the responses given to questions asked through AskMyNetwork.

I validated AskMyNetwork via a number of inquiries on topics such as restaurants, places to visit in a city, and the arts. The results of the validation were satisfactory.




MUHARREM ALI TUNC - LPTV-Aware Bit Loading and Channel Estimation in Broadband PLC for Smart Grid

PhD Dissertation Defense (EE)

When & Where:
April 21, 2014
11:15 am
246 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Lingjia Liu
James Sterbenz
Atanas Stefanov*

Abstract:
Power line communication (PLC) has received steady interest over recent decades because of its economical use of existing power lines, and is one of the communication technologies envisaged for Smart Grid infrastructure. However, power lines are not designed for data communication, and this brings unique challenges. In particular, for broadband (BB) PLC, the channel exhibits linear periodically time-varying (LPTV) behavior synchronous to the AC mains cycle due to time-varying impedances, and impulsive noise due to switching events in the power line network is present in addition to background noise. In this work, we focus on two major aspects of an orthogonal frequency division multiplexing (OFDM) system for BB PLC LPTV channels: bit and power allocation, and channel estimation (CE).

For the problem of optimal bit and power allocation, we show that a power constraint averaged over many microslots can be exploited for further performance improvements through bit loading. Due to the matroid structure of the optimization problem, greedy-type algorithms are proven to be optimal for the new LPTV-aware bit and power loading. Next, two mechanisms are utilized to reduce the complexity of the optimal LPTV-aware bit loading and the peak microslot power levels: employing representative values from the microslot transfer functions, and power clipping.

Next, we introduce a robust CE scheme with low overhead that addresses the drawbacks of block-type pilot arrangement and decision directed CE schemes such as large estimation overhead, and difficulty in channel tracking in the case of sudden changes in the channel, respectively. A transform domain (TD) analysis approach is developed to determine the cause of changes in the channel estimates. The result of TD analysis is then exploited in the proposed scheme to mitigate the effects of LPTV channel and impulsive noise.

Our results indicate that the proposed reduced complexity LPTV-aware bit loading with power clipping algorithm performs close to the optimal scheme, and the proposed CE scheme based on TD analysis has low estimation overhead and is robust to changes in the channel and noise, making them good alternatives for BB PLC LPTV channels.
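
As a generic illustration of the greedy bit-loading principle whose optimality is referenced above, the following sketch repeatedly assigns the next bit to the subcarrier where it costs the least additional power. The SNR-gap constant, channel gains, and single average power constraint are illustrative assumptions; the LPTV microslot structure of the dissertation is omitted.

    import numpy as np

    def greedy_bit_loading(channel_gains, total_power, gamma=1.0):
        """Assign bits one at a time to the subcarrier whose incremental power
        cost is smallest, until the average power budget is exhausted."""
        bits = np.zeros(len(channel_gains), dtype=int)
        used = 0.0

        def incremental_power(i):
            # extra power needed on subcarrier i to carry one more bit
            return gamma * (2 ** (bits[i] + 1) - 2 ** bits[i]) / channel_gains[i]

        while True:
            costs = [incremental_power(i) for i in range(len(channel_gains))]
            i = int(np.argmin(costs))
            if used + costs[i] > total_power:
                break
            bits[i] += 1
            used += costs[i]
        return bits, used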



BRIAN CORDILL - Radar System Enhancement through High Fidelity Electromagnetic Modeling

PhD Dissertation Defense (EE)

When & Where:
April 18, 2014
12:00 pm
129 Nichols
Committee Members:
Sarah Seguin, Chair
Shannon Blunt
Chris Allen
Jim Stiles
Mark Ewing*

Abstract:
Many of the innovative algorithms that permeate the field of array processing are based on a very simple signal model of an array. This simple, although powerful, model is at times a pale reflection of the complexities inherent in the physical world, and this model mismatch opens the door to performance degradation of any solution that the model underpins. This dissertation seeks to explore the impact of model mismatch upon common array processing algorithms. Model mismatch is examined in two ways: first, by developing a blind array calibration routine that estimates model mismatch and incorporates that knowledge into the RISR direction of arrival estimation algorithm; second, by examining model mismatch between a transmitting and receiving antenna array, and assessing the impact of this mismatch on prolific direction of arrival estimation algorithms. In both of these studies it is shown that engineers have traded algorithm performance for model simplicity, and that if we are willing to deal with the added complexity we can recapture that lost performance.




JOSHUA DAVIS - A Covert Channel Using Named Resources

MS Thesis Defense (CoE)

When & Where:
April 14, 2014
1:00 pm
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Fengjun Li
Bo Luo

Abstract:
A method of transmitting information clandestinely over a variety of network protocols is designed and discussed. A demonstrative implementation is created that utilizes the ubiquitous Hypertext Transfer Protocol (HTTP) and the world wide web. Key contributions include the use of access ordering to convey information, and the modulation of transaction level timing to emulate user behavior.




NAHAL NIAKAN - Mutual Coupling Reduction Between Closely Spaced U-slot Patch Antennas by Optimizing Array Configuration and Its Applications in MIMO

MS Thesis Defense (EE)

When & Where:
March 31, 2014
3:30 pm
2001B Eaton Hall
Committee Members:
Sarah Seguin, Chair
Chris Allen
Jim Stiles

Abstract:
Multiple-input, multiple-output (MIMO) systems have received considerable attention over the last decade due to their ability to provide high throughputs and mitigate multipath fading effects. There are some limitations to getting the most from MIMO, such as mutual coupling between antenna elements in an array. Mutual coupling, and therefore inter-element spacing, has an important effect on the channel capacity and error rate of a MIMO communication system and on the ambiguity of a MIMO radar system. A large body of research focuses on reducing the mutual coupling in antenna arrays to improve MIMO performance. Antenna design affects the performance of MIMO systems, and two aspects of the antenna's role in MIMO performance are investigated in this thesis. Employing a suitable antenna can have a significant impact on the performance of a MIMO system. In addition to antenna design, another antenna-related issue that helps optimize system performance is reducing mutual coupling between antenna elements in an array. The effect of array antenna configuration on mutual coupling has been studied in this research; the main purpose is to find the array configuration which provides minimum mutual coupling between elements. The U-slot patch antenna, which has attracted many researchers because of features such as wide bandwidth, multi-band resonance, and the ease of achieving different polarizations, is used in this study.




ZAID HAYYEH - Exploiting Wireless Networks for Covert Communications

PhD Dissertation Defense (EE)

When & Where:
March 31, 2014
2:30 pm
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Shannon Blunt
Erik Perrins
David Petr
Jeffrey Lang*

Abstract:
The desire to hide communications has existed since antiquity. This includes hiding the existence of the transmission and the location of the sender. Wireless networks offer an opportunity for hiding a transmission by placing a signal in the radio frequency (RF) spectrum occupied by a target network, which also has the added benefit of lowering its probability of detection.

This research hides a signal within the RF environment of a packet based wireless (infrastructure) network. Specifically, in this research the interfering (covert) signal is placed in the guard band of the target network’s orthogonal frequency division multiplexed (OFDM) signal. We show that the existence of adaptive protocols allow the target network to adjust to the existence of the covert signal. In other words, the wireless network views the covert network as a minor change in the RF environment; this work shows that the covert signal can be indistinguishable from other wireless impairments such as fading.

The impact of the covert signal on the target system performance is discovered through analysis and simulation; the analysis and simulation begin at the physical layer where the interaction between the target and covert systems occurs. After that, analysis is performed on the impact of the covert link on the target system at data-link layer. Finally, we analyze the performance of the target system at the transmission control protocol (TCP) layer which characterizes the end-to-end performance. The results of this research demonstrate the potential of this new method for hiding the transmission of information. The results of this research could encourage the creation of new protocols to protect these networks from exploitation of this manner.



RAMESH KUMAR DUGAR - Pulsed Doppler Lidar for Velocity Measurement using Coherent Detection

MS Project Defense (EE)

When & Where:
March 28, 2014
1:00 pm
250 Nichols Hall
Committee Members:
Ron Hui, Chair
Glenn Prescott
Jim Stiles

Abstract:
Measurement of wind velocity is essential for enhancing wind energy utilization, which is important given that wind is one of the most important renewable sources of energy, and LIDAR (Light Detection and Ranging) has become a very popular technology for such measurements. In this study, a pulsed Doppler lidar operating at 1.5 µm with coherent detection is demonstrated by measuring the velocity of a spinning disc, the hard target used in this project. The lidar uses the principle of the Doppler shift to measure velocity, and an acousto-optic modulator is used for frequency shifting in the transmitter to produce an intermediate frequency. A data acquisition board (DAQ) was used to generate the pulses and to process the collected data in MATLAB. A graphical user interface was used to interface the DAQ with the system, and parameters such as PRF, pulse width, and record directory could be changed directly from the computer. A thorough literature review has been conducted and is presented. The architecture of the lidar, velocity results, future work, and an analysis of the SNR's dependence on range and pulse energy under predefined atmospheric conditions will be discussed.
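
A minimal sketch of the velocity computation implied above: locate the strongest return in the spectrum of the coherently detected signal, measure its offset from the acousto-optic modulator's intermediate frequency, and convert that Doppler shift to radial velocity. The wavelength, IF handling, and single-FFT peak estimate are illustrative assumptions.

    import numpy as np

    def doppler_velocity(rx, fs, f_if, wavelength=1.5e-6):
        """Estimate radial velocity from real-valued coherent-detection samples.
        rx: detector samples, fs: sample rate (Hz), f_if: AOM intermediate frequency (Hz)."""
        spectrum = np.abs(np.fft.rfft(rx * np.hanning(len(rx))))
        freqs = np.fft.rfftfreq(len(rx), 1.0 / fs)
        f_peak = freqs[np.argmax(spectrum)]      # frequency of the strongest return
        f_doppler = f_peak - f_if                # shift away from the intermediate frequency
        return wavelength * f_doppler / 2.0      # v = lambda * f_d / 2 for a monostatic lidar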



SREE HARSHA KAKARLA - Design of Transmitter and SMPS for Blood Oxygen Level Tomography

MS Project Defense (EE)

When & Where:
March 27, 2014
3:30 pm
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
James Stiles

Abstract:
Ever since the invention of near-infrared optical spectroscopy almost three decades ago, research has continued actively to improve the accuracy of biological tissue oxygenation measurements and make them commercially available in clinics for medical diagnosis and imaging. Hemoglobin concentration in blood, especially near the brain, can be found by determining the absorption and scattering coefficients of a chromophore at a particular wavelength. But there are many challenges to overcome when infrared light penetrates deeper into the skull, which acts as a highly scattering medium. This work has taken a new shape with the application of diffusion theory to separate the absorbed and scattered light in tissues. With this motivation, in this project we designed a dual-frequency (120 MHz and 125 MHz), two-wavelength LED transmitting system to transmit optical power through fibers and direct NIR light into a scattering medium, where it is then received by the detection and demodulation circuit for data acquisition. Different methods available to measure the absorption and reduced scattering coefficients for a non-homogeneous medium will be discussed.



DONGSHENG ZHANG - Modeling Critical Node Attacks in Mobile Wireless Networks

PhD Comprehensive Defense (CS)

When & Where:
February 21, 2014
10:00 am
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Fengjun Li
Gary Minden
Bernhard Plattner
Caterina Scoglio
John Symons*

Abstract:
Understanding network behavior under challenges is essential to constructing a resilient and survivable network. Due to the mobility and wireless channel properties, it is more difficult to model and analyze mobile wireless networks under various challenges. We provide a comprehensive model to analyze malicious attacks against mobile ad hoc networks. We analyze comprehensive graph-theoretical properties and network performance of the dynamic networks under attacks against the critical nodes using both synthetic and real-world mobility traces. Our study provides insights into the design and construction of resilient and survivable mobile wireless networks.



JOHN GIBBONS - Modeling Content Lifespan in Online Social Networks Using Data Mining

PhD Dissertation Defense (CS)

When & Where:
February 3, 2014
2:30 pm
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Perry Alexander
Jerzy Grzymala-Busse
Jim Miller
Prajna Dhar*

Abstract:
Online Social Networks (OSNs) are integrated into business, entertainment, politics, and education; they are integrated into nearly every facet of our everyday lives. They have played essential roles in everything from milestones for humanity, such as the social revolutions in certain countries, to more day-to-day activities, such as streaming entertaining or educational materials. Not surprisingly, social networks are a subject of study not only for computer scientists, but also for economists, sociologists, political scientists, and psychologists, among others. In this dissertation, we build a model that is used to classify content on the OSNs of Reddit, 4chan, Flickr, and YouTube according to the types of lifespan their content has and the popularity tiers that the content reaches. The proposed model is evaluated using 10-fold cross-validation, using the data mining techniques of Sequential Minimal Optimization (SMO), which is a support vector machine algorithm, Decision Table, Naïve Bayes, and Random Forest. The run times and accuracies are compared across OSNs, models, and data mining algorithms.
Our experiments compared the runtimes and accuracy of SMO, Naïve Bayes, Decision Table, and Random Forest to classify the lifespan of content on Reddit, 4chan, and Flickr as well as to classify the popularity tier of content on Reddit, 4chan, Flickr, and YouTube. The experimental results indicate that SMO is capable of outperforming the other algorithms in runtime across all OSNs. Decision Table has the longest observed runtimes, failing to complete analysis before system crashes in some cases. The statistical analysis indicates, with 95% confidence, that there is no statistically significant difference in accuracy between the algorithms across all OSNs. Reddit was shown, with 95% confidence, to be the OSN whose content is least likely to be misclassified. All other OSNs were shown to have no statistically significant difference in terms of their content being more or less likely to be misclassified when compared pairwise with each other.
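
A minimal sketch of the evaluation protocol described above: 10-fold cross-validation comparing several classifiers. It uses scikit-learn stand-ins (for example, SVC in place of WEKA's SMO implementation), and the feature matrix and labels are random placeholders rather than the OSN datasets studied here.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier

    X = np.random.rand(500, 12)           # placeholder content-feature matrix
    y = np.random.randint(0, 3, 500)      # placeholder lifespan classes

    models = {
        "SVM (SMO analogue)": SVC(),
        "Naive Bayes": GaussianNB(),
        "Random Forest": RandomForestClassifier(n_estimators=100),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10)    # 10-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f}")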



MIKE ZAKHAROV - Designing a Multichannel Sense-and-Avoid Radar for Small UASs

MS Thesis Defense (EE)

When & Where:
January 30, 2014
11:00 am
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Ron Hui
Jim Stiles

Abstract:
To enhance the capabilities of autonomous flight systems for Unmanned Aircraft Systems (UASs), a multichannel Frequency-Modulated Continuous Wave (FMCW) collision-avoidance radar with a center frequency of 1.445 GHz is designed. The radar is intended to provide situational awareness for a 40% Yak-54 model aircraft by providing, in real time, range, radial velocity, and angle-of-arrival (AoA) information on surrounding targets with an update rate of 10 Hz. A target's range and Doppler are determined by employing a two-dimensional (2-D) Fast Fourier Transform (FFT) on the received signal, which maps the target to a specific range-Doppler bin. Tests have shown that the prototype radar is capable of providing range detection up to 430 m with an accuracy of 0.6 m for a target with a 1-m² radar cross section (RCS). The radar is designed to provide a Doppler resolution of 10 Hz. An array of receiving antennas is used to determine a target's elevation and azimuth angles by exploiting the received signal's phase difference at each individual antenna. The AoA measurement error due to thermal noise was found to be less than 3° for a signal-to-noise ratio (SNR) of 18 dB.
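
A minimal sketch of the 2-D FFT step described above, mapping a stack of deramped FMCW sweeps into a range-Doppler map. The data-cube shape, windowing, and lack of calibration are illustrative simplifications, not the radar's actual signal processing chain.

    import numpy as np

    def range_doppler_map(beat_signal):
        """beat_signal: 2-D complex array, shape (n_sweeps, n_samples_per_sweep),
        holding the deramped (beat) signal for consecutive FMCW sweeps."""
        n_sweeps, n_samples = beat_signal.shape
        win_fast = np.hanning(n_samples)            # window along fast time (range)
        win_slow = np.hanning(n_sweeps)[:, None]    # window along slow time (Doppler)
        rng = np.fft.fft(beat_signal * win_fast, axis=1)                    # range FFT per sweep
        rd = np.fft.fftshift(np.fft.fft(rng * win_slow, axis=0), axes=0)    # Doppler FFT across sweeps
        return np.abs(rd)   # magnitude; each cell is one range-Doppler bin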



YEFENG SUN - Optical Absorption Simulation by ZnTe/CdTe Superlattices Based on Kronig-Penney Model

MS Project Defense (EE)

When & Where:
January 23, 2014
3:30 pm
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Ken Demarest
Victor Frost

Abstract:
Nowadays superlattices (SLs) are widely used as optical materials due to their optical absorption properties. Short-period superlattices with certain optical properties, such as InAs/GaSb type-II superlattices and ZnTe/CdTe superlattices, can serve in mid-infrared (MIR) detection and solar cells. In this study, a standard Kronig-Penney model is applied to calculate the miniband structure of such SLs. On the basis of the energy-balance equation derived from the Boltzmann equation, a simple approach is used to calculate the optical absorption coefficient for the corresponding SL systems. A comparison of simulation results and experimental findings is made, and likely causes of error and future work are discussed.
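
A minimal sketch of the Kronig-Penney calculation referenced above: energies E below the barrier belong to a miniband when the right-hand side of the dispersion relation cos(k(a+b)) = f(E) has magnitude at most one. The well/barrier widths, barrier height, and effective mass below are placeholder values, not the ZnTe/CdTe parameters used in the project.

    import numpy as np

    hbar = 1.054571817e-34      # J*s
    m0 = 9.1093837015e-31       # electron mass, kg
    q = 1.602176634e-19         # J per eV

    # placeholder superlattice parameters (not the ZnTe/CdTe values)
    a, b = 5e-9, 2e-9           # well and barrier widths (m)
    V0 = 0.3 * q                # barrier height (J)
    m = 0.1 * m0                # effective mass

    def kp_rhs(E):
        """Right-hand side f(E) of cos(k(a+b)) = f(E) for E below the barrier."""
        alpha = np.sqrt(2 * m * E) / hbar
        beta = np.sqrt(2 * m * (V0 - E)) / hbar
        return (np.cos(alpha * a) * np.cosh(beta * b)
                + (beta**2 - alpha**2) / (2 * alpha * beta)
                * np.sin(alpha * a) * np.sinh(beta * b))

    E = np.linspace(1e-3, 0.299, 2000) * q    # energies below the barrier
    allowed = np.abs(kp_rhs(E)) <= 1.0        # True inside a miniband
    # contiguous runs of True mark the miniband edges of the superlattice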



ADAM VAN HORN - Machine Learning Techniques for High Performance Engine Calibration

MS Thesis Defense (CS)

When & Where:
January 17, 2014
10:00 am
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Jerzy Grzymala-Busse
James Miller
Christopher Depcik*

Abstract:
Ever since the advent of electronic fuel injection, auto manufacturers have been able to increase fuel efficiency and power production, and to meet stricter emission standards. Most of these systems use engine sensors (RPM, throttle position, etc.) in concert with look-up tables to determine the correct amount of fuel to inject. While these systems work well, it is time- and labor-intensive to fine-tune the parameters for these look-up tables. Automobile manufacturers are able to absorb the cost of this calibration since the variation between engines in a new model line is often small enough to be inconsequential for a specific calibration.

However, a growing number of drivers are interested in modifying their vehicles with the intent of improving performance. While some aftermarket performance upgrades can be accounted for by the original equipment manufacturer (OEM) electronic control unit (ECU), other more significant changes, such as adding a turbocharger or installing larger fuel injectors, require more drastic accommodations. These modifications often require an entirely new ECU calibration or an aftermarket ECU to properly control the upgraded engine. The problem, then, is that the driver becomes responsible for the calibration of the ECU of this “new” engine. However, most drivers are unable to devote the resources required to achieve a calibration of the same quality as the original manufacturers. At best, this results in reduced fuel economy and performance, and at worst, unsafe and possibly destructive operation of the engine.

The purpose of this thesis is to design and develop—using machine learning techniques—an approximate predictive model from current engine data logs, which can be used to rapidly and incrementally improve the calibration of the engine. While there has been research into novel control methods for engine air-fuel ratio control, these methods are inaccessible to the majority of end users, either due to cost or the required expertise with engine calibration. This study shows that there is a great deal of promise in applying machine learning techniques to engine calibration and that the process of engine calibration can be expedited by the application of these techniques.
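
As a toy illustration of building an approximate predictive model from engine logs, the sketch below fits a regressor that maps engine sensor readings to an observed fueling quantity. The log format, feature set, synthetic data, and the choice of a random-forest regressor are all assumptions for illustration, not the model developed in this thesis.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # synthetic stand-in for an engine data log: RPM, throttle position (%), MAP (kPa)
    rng = np.random.default_rng(0)
    X = np.column_stack([
        rng.uniform(800, 7000, 2000),     # engine speed (RPM)
        rng.uniform(0, 100, 2000),        # throttle position
        rng.uniform(30, 200, 2000),       # manifold absolute pressure
    ])
    # fake target: injector pulse width, loosely tied to the sensor readings
    y = 0.002 * X[:, 0] + 0.05 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 1, 2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200).fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))   # sanity check before trusting predictions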



LANE RYAN - Polyphase-Coded FM Waveform Optimization within a LINC Transmit Architecture

MS Thesis Defense (EE)

When & Where:
January 16, 2014
10:00 am
246 Nichols Hall
Committee Members:
Chris Allen, Chair
Shannon Blunt
Jim Stiles

Abstract:
Linear amplification using nonlinear components (LINC) is a design approach that can suppress the effects of the nonlinear distortion introduced by the transmitter. A typical transmitter design requirement is for the high power amplifier to be operated in saturation. The LINC approach described here employs a polyphase-coded FM (PCFM) waveform that is able to overcome this saturated amplifier distortion to greatly improve the spectral containment of the transmitted waveform. A two stage optimization process involving simulation and hardware-in-the-loop routines is used to create the final PCFM waveform code.



YUFEI CHENG - Performance Analysis of Different Traffic Types in Mobile Ad-hoc Networks

MS Thesis Defense (EE)

When & Where:
January 7, 2014
1:30 pm
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Fengjun Li
Gary Minden

Abstract:
Mobile ad hoc networks (MANETs) present great challenges to new protocol design, especially in scenarios where nodes are highly mobile. Routing protocol performance is essential to the performance of wireless networks, especially in mobile ad hoc scenarios. The development of new routing protocols requires comparing them against well-known protocols in various simulation environments. Furthermore, application traffic such as transactional traffic has not been investigated for domain-specific MANET scenarios. Overall, there is not enough performance comparison work in the existing literature. This thesis presents extensive performance comparison work for MANETs using inclusive parameter sets covering both highly dynamic environments and low-mobility cases.