
Electrical Engineering and Computer Science

Defense Notices

EECS MS and PhD Defense Notices

All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.




Upcoming Defense Notices


RANJITH SOMPALLI - Computational Methods to Enable an Invertebrate Paleontology Knowledge Base

MS Project Defense (CS)

When & Where:
May 20, 2016
2:00 pm
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Jerzy Grzymala-Busse
Richard Wang

Abstract:
The Treatise on Invertebrate Paleontology is the most authoritative compilation of invertebrate fossil records. The quality of studies in paleontology depends in particular on the accessibility of fossil data. Unfortunately, the PDF version of the Treatise currently available is just a scanned copy of the paper publications, and the content is in no way organized to facilitate search and knowledge discovery. This project builds an information-retrieval-based system to extract the fossil descriptions, images, and other available information from the Treatise. The project is divided into two parts. The first part extracts the text and images from the Treatise, organizes the information in a structured format, stores it in a relational database, and builds a search engine to browse the fossil data. Extracting text requires identifying common textual patterns, and a text-parsing algorithm is developed to recognize these patterns and organize the information in a structured format. Images are extracted using image processing techniques such as segmentation and morphological operations, and are then associated with the corresponding textual descriptions. A search engine is built to browse the extracted information efficiently, and the web interface provides options to perform many useful tasks with ease. The second part of this research focuses on the implementation of a content-based information retrieval system. All images from the Treatise are grayscale fossil images, and identifying matching images based on visual features alone is very difficult. Hence, we employ an approach that integrates textual and visual features to identify matching images. Textual features are extracted from the fossil descriptions; using statistical and part-of-speech tagging approaches, an ontology is generated that forms attribute-property pairs describing what each region of a shell looks like. Popular image features such as SIFT, GIST, and HOG are extracted from the fossil images. The textual and image features are then integrated to retrieve the information for the fossil images that match a query image.



NAGABHUSHANA GARGESHWARI MAHADEVASWAMY - How Duplicates Affect the Error Rate of Data Sets During Validation

MS Project Defense (CS)

When & Where:
May 20, 2016
12:00 pm
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract:
In data mining, duplicate data plays a significant role in determining the induced rule set. In this project, the impact of duplicates in the input data set on the rule set is analyzed using the error rate. The error rate is calculated by comparing the induced rule set against the testing part of the input data. Experimental results show that the error rate decreases as the percentage of duplicates in the input data set increases, which demonstrates that duplicate data plays a crucial role in the validation process of machine learning. The LEM2 algorithm and a rule checker application were implemented as part of this project: the LEM2 algorithm induces the rule set for a given input data set, and the rule checker application calculates the error rate.
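
To make the validation step concrete, here is a minimal sketch in Python (illustrative only; the rule and data formats are hypothetical and this is not the project's LEM2 or rule checker code) of how an error rate can be computed by matching induced rules against testing cases:

    # Hypothetical rule format: (conditions, decision), where conditions is a
    # dict mapping attribute -> required value. Test cases are dicts that also
    # carry the actual decision under the key "class".

    def classify(rules, case):
        # Return the decision of the first rule whose conditions all match.
        for conditions, decision in rules:
            if all(case.get(a) == v for a, v in conditions.items()):
                return decision
        return None  # no rule covers this case

    def error_rate(rules, test_cases):
        errors = sum(1 for c in test_cases if classify(rules, c) != c["class"])
        return errors / len(test_cases)

    rules = [({"temperature": "high", "headache": "yes"}, "flu"),
             ({"headache": "no"}, "healthy")]
    test = [{"temperature": "high", "headache": "yes", "class": "flu"},
            {"temperature": "low", "headache": "no", "class": "flu"}]
    print(error_rate(rules, test))  # 0.5 for this toy example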



RITANKAR GANGULY - Graph Search Algorithms and Their Applications

MS Project Defense (CS)

When & Where:
May 12, 2016
2:00 pm
2001B Eaton Hall
Committee Members:
Man Kong, Chair
Nancy Kinnersley
Jim Miller

Abstract:
Depth-first search (DFS) and breadth-first search (BFS) are two of the most extensively used graph traversal algorithms, compiling information about a graph in linear time. These two traversal mechanisms underpin applications that are widely used in network engineering, web analytics, social networking, postal services, and hardware implementations. DFS and BFS differ in the order in which they explore vertices and in how they store discovered but unprocessed vertices. A BFS implementation usually needs less time but consumes more memory than a DFS implementation. DFS follows a LIFO discipline and is implemented using a stack; BFS follows a FIFO discipline and is realized using a queue. The order in which the vertices are visited by DFS or BFS can be represented as a tree. The type of graph (directed or undirected), along with the edges of these trees, forms the basis of the applications of BFS and DFS. Determining the shortest path between vertices of an unweighted graph can be used in network engineering to transfer data packets. Checking for the presence of cycles is critical for minimizing redundancy in telecommunications and is used extensively by social networking websites to analyze how people are connected. Finding bridges in a graph or determining the set of articulation vertices helps minimize vulnerability in network design. Finding the strongly connected components of a graph can be used by model checkers in computer science. Determining an Euler circuit in a graph can be used by the postal service industry, and the algorithm can be implemented in linear running time using enhanced data structures. This survey project briefly defines and explains the basics of DFS and BFS traversal and explores some of the applications that are based on these algorithms.
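
The queue/stack distinction described above can be illustrated with a short Python sketch on a toy adjacency-list graph (illustrative only, not part of the project):

    from collections import deque

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def bfs(start):
        # FIFO queue: vertices are visited level by level.
        order, seen, queue = [], {start}, deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return order

    def dfs(start):
        # LIFO stack: the most recently discovered vertex is expanded first.
        order, seen, stack = [], set(), [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            order.append(v)
            for w in reversed(graph[v]):
                stack.append(w)
        return order

    print(bfs("A"))  # ['A', 'B', 'C', 'D']
    print(dfs("A"))  # ['A', 'B', 'D', 'C']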




MICHAEL BLECHA - Implementation of a 2.45GHz Power Amplifier for use in Collision Avoidance Radar

MS Project Defense (EE)

When & Where:
May 10, 2016
2:30 pm
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Glenn Prescott
Jim Stiles

Abstract:
The integration of an RF power amplifier into a collision avoidance radar will increase the maximum detection distance of the radar. Increasing the maximum detection distance will allow a radar system mounted on an unmanned aerial vehicle (UAV) to observe obstacles earlier and give the UAV more time to react. The UAVradars project has been miniaturized to support operation on an unmanned aircraft and could benefit from an increase in maximum detection distance.
The goal of this project is to create a one-watt power amplifier for the 2.4 GHz–2.5 GHz band that can be integrated into the UAVradars project. The amplifier will be powered from existing power supplies in the radar system and must be small and lightweight to support operation on board the UAV in flight. This project will consist of the schematic and layout design, simulations, fabrication, and characterization of the power amplifier. The power amplifier will be designed to fit into the current system with minimal system modifications required.
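
Because received echo power falls off as the fourth power of range in the standard radar range equation, the maximum detection range grows only as the fourth root of transmit power. A back-of-the-envelope sketch in Python (the 0.1 W starting point is a hypothetical number, not the UAVradars system's actual output power):

    def range_improvement(p_old_w, p_new_w):
        # Radar range equation: R_max is proportional to P_t ** 0.25 when all
        # other parameters (gains, wavelength, RCS, receiver sensitivity) are fixed.
        return (p_new_w / p_old_w) ** 0.25

    # e.g. raising transmit power from a hypothetical 0.1 W to 1 W
    print(range_improvement(0.1, 1.0))  # ~1.78x the maximum detection range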



HARSHUL ROUTHU - A Comparison of Two Decision Tree Generating Algorithms C4.5 and CART Based on Testing Datasets with Missing Attribute Values

MS Project Defense (CS)

When & Where:
May 10, 2016
10:00 am
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract:
In data mining, missing data are a common occurrence and can have a significant effect on the conclusions that can be drawn from the data. Classification of data with missing attribute values is a challenging task. One of the most popular techniques for classifying such data is decision tree induction.
In this project, we compare two decision tree generating algorithms, C4.5 and CART, using their original implementations, on different datasets with missing attribute values taken from the University of California Irvine (UCI). The comparative analysis of these two implementations is carried out in terms of accuracy on training and testing data, and decision tree complexity based on depth and size. Results from experiments show that there is a statistically insignificant difference between C4.5 and CART in terms of accuracy on testing data and complexity of the decision tree. On the other hand, accuracy on training data is significantly better for CART than for C4.5.



HADEEL ALABANDI - A Survey of Metrics Employed to Assess Software Security

MS Thesis Defense (CS)

When & Where:
May 9, 2016
3:00 pm
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Andy Gill
Heechul Yun

Abstract:
Measuring and assessing software security is a critical concern, as it is undesirable to develop risky and insecure software. Various measurement approaches and metrics have been defined to assess software security. For researchers and software developers, it is valuable to have the different metrics and measurement models in one place, whether to evaluate existing measurement approaches, to compare two or more metrics, or to find the proper metric to measure software security at a specific software development phase. There is no existing survey of software security metrics that covers metrics available at all software development phases. In this paper, we present a survey of metrics used to assess and measure software security, and we categorize them by software development phase. Our findings reveal that a critical lack of automated tools, and the need for detailed knowledge of or experience with the measured software, are the major hindrances to the use of existing software security metrics.




HARISH SAMPANGI - Delay Feedback Reservoir (DFR) Design in Neuromorphic Computing Systems and its Application in Wireless Communications

MS Project Defense (EE)

When & Where:
May 9, 2016
2:00 pm
2001B Eaton Hall
Committee Members:
Yang Yi, Chair
Glenn Prescott
Jim Rowland

Abstract:
As semiconductor technologies continue to scale further into the nanometer regime, it is important to study how non-traditional computer architectures may be uniquely suited to take advantage of the novel behavior observed in many emerging technologies. Neuromorphic computing systems represent one such non-traditional architecture. Reservoir computing, a computational paradigm inspired by neural systems, has become increasingly popular for solving a variety of complex recognition and classification problems. The traditional reservoir computing method employs three layers: the input layer, the reservoir, and the output layer. The input layer feeds the input signals to the reservoir via fixed random weighted connections. These weights scale the input given to the nodes, creating different input scalings across the input nodes. The second layer, called the reservoir, usually consists of a large number of randomly connected nonlinear nodes, constituting a recurrent network. Finally, the output weights are extracted from the output layer. Contrary to this traditional approach, the delayed feedback reservoir (DFR) replaces the entire network of connected nonlinear nodes with a single nonlinear node subjected to delayed feedback. This approach not only provides a drastic simplification of the experimental implementation of artificial neural networks for computing purposes, but also demonstrates the large computational processing power hidden in even the simplest delay-dynamical system. Previous implementations of reservoir computing using the echo state network have proven efficient for channel estimation in wireless Orthogonal Frequency-Division Multiplexing (OFDM) systems. This project aims to verify the performance of the DFR in channel estimation by calculating its bit error rate (BER) and comparing it with standard techniques such as least-squares (LS) and minimum mean-square error (MMSE) estimation.
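
A minimal numerical sketch of the single-node delayed-feedback idea follows (illustrative only; the tanh nonlinearity, the parameter values, and the simplified update of all virtual nodes at once are assumptions, not the project's DFR design):

    import numpy as np

    def dfr(inputs, n_virtual=50, feedback=0.8, in_scale=0.5, seed=0):
        # Single nonlinear node with delayed feedback, time-multiplexed into
        # n_virtual "virtual nodes" per input sample. Simplification: the whole
        # delay line is updated at once rather than node by node.
        rng = np.random.default_rng(seed)
        mask = rng.choice([-1.0, 1.0], size=n_virtual)   # fixed random input mask
        state = np.zeros(n_virtual)                      # contents of the delay line
        states = []
        for u in inputs:
            state = np.tanh(feedback * state + in_scale * mask * u)
            states.append(state.copy())
        return np.array(states)   # features for a linear readout (e.g. ridge regression)

    X = dfr(np.sin(np.linspace(0, 8 * np.pi, 200)))
    print(X.shape)  # (200, 50): one row of reservoir states per input sample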



AUDREY SEYBERT - Analysis of Artifacts Inherent to Real-Time Radar Target Emulation

MS Thesis Defense (EE)

When & Where:
May 9, 2016
10:00 am
246 Nichols Hall
Committee Members:
Chris Allen, Chair
Shannon Blunt
Jim Stiles

Abstract:
Executing high-fidelity tests of radar hardware requires real-time fixed-latency target emulation. Because fundamental radar measurements occur in the time domain, real-time fixed-latency target emulation is essential to producing an accurate representation of a radar environment. Radar test equipment is further constrained by the application-specific minimum delay to a target of interest, a parameter that limits the maximum latency through the target emulator algorithm. These time constraints on radar target emulation result in imperfect DSP algorithms that generate spectral artifacts. Knowledge of the behavior and predictability of these spectral artifacts is the key to identifying whether a particular suite of hardware is sufficient to execute tests for a particular radar design. This work presents an analysis of the design considerations required for development of a digital radar target emulator. Further considerations include how the spectral artifacts inherent to the algorithms change with respect to the radar environment and an analysis of how effectively various DSP algorithms can be used to produce an accurate representation of simple target scenarios. This work presents a model representative of natural target motion, a model representative of the side effects of digital target emulation, and finally a true HDL simulation of a target.
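
As a simple illustration of what a digital target emulator produces (not the hardware algorithms analyzed in this thesis), a point target can be modeled as a delayed, attenuated, Doppler-shifted copy of the transmitted samples; rounding the delay to an integer number of samples, as below, is one example of the time quantization that gives rise to emulation artifacts:

    import numpy as np

    def emulate_point_target(tx, fs, delay_s, doppler_hz, amplitude):
        # Delayed, scaled, Doppler-shifted copy of the transmit samples.
        n_delay = int(round(delay_s * fs))        # integer-sample delay quantization
        t = np.arange(len(tx)) / fs
        echo = np.zeros(len(tx), dtype=complex)
        echo[n_delay:] = amplitude * tx[:len(tx) - n_delay]
        return echo * np.exp(2j * np.pi * doppler_hz * t)

    fs = 100e6                                              # assumed sample rate
    tx = np.exp(2j * np.pi * 5e6 * np.arange(1024) / fs)    # toy 5 MHz tone
    rx = emulate_point_target(tx, fs, delay_s=2e-6, doppler_hz=1e3, amplitude=0.1)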



CHRISTOPHER SEASHOLTZ - Security and Privacy Vulnerabilities in Unmanned Aerial Vehicles

MS Project Defense (CoE)

When & Where:
May 6, 2016
3:30 pm
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Joe Evans
Fengjun Li

Abstract:
In the past few years, UAVs have become very popular among average citizens. Much like their military counterparts, these UAVs can be controlled by computers instead of a remote controller. While this may not appear to be a major security issue, the information gained from compromising a UAV can be used for other malicious activities. To understand the potential attack surfaces of various UAVs, this paper presents the theory behind multiple possible attacks, as well as implementations of a select number of those attacks. The main objective of this project was to obtain complete control of a UAV while in flight. Only a few of the attacks demonstrated or mentioned provide this ability on their own; the remaining attacks provide information that can be used in conjunction with others to obtain full control of, or complete knowledge about, a system. Once the attacks have been shown to be possible, measures for proper defense must be taken. For each attack described in this paper, possible countermeasures are given and explained.



ARIJIT BASU - Analyzing Bag of Visual Words for Efficient Content Based Image Retrieval and Classification

MS Project Defense (CS)

When & Where:
May 6, 2016
11:00 am
250 Nichols Hall
Committee Members:
Richard Wang, Chair
Prasad Kulkarni
Bo Luo

Abstract:
Content-based image retrieval (CBIR), also known as query by image content (QBIC), is a retrieval technique in which a detailed analysis of image features is performed to retrieve similar images from an image base. Content refers to any information that can be derived from the image itself, such as texture, color, and shape, which are primarily global features, as well as local features such as SIFT, SURF, and HOG. Content-based image retrieval, as opposed to traditional text-based image retrieval, has been in the limelight for quite a while owing to its removal of much of the annotation burden from the end user and its attempt to bridge the semantic gap between low-level features and high-level human perception.
Image categorization is the process of classifying images into distinct categories: image features are extracted from a subset of images (or the entire database) for each category and fed to a machine learning classifier, which then predicts the category labels. The bag-of-words (BoW) model is a well-known, flexible model that represents an image as a histogram of visual patches; the idea originally comes from the application of the bag-of-words model in document retrieval and texture classification. Clustering is a central step of the BoW model: it groups similar features from the entire dataset into a visual vocabulary before the resulting representations are fed to a support vector machine (SVM) classifier. The SVM classifier operates on each image represented as a bag of visual features after clustering and then makes its predictions. In this work we first apply the BoW model to well-known datasets and then obtain accuracy measures such as the confusion matrix, the Matthews correlation coefficient (MCC), and other statistical measures. For feature extraction we use SURF features owing to their rotation- and scale-invariant characteristics. The model has been trained and applied on two well-known datasets, Caltech 101 and Flickr-25K, followed by a detailed performance analysis in different scenarios.
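
A condensed sketch of the bag-of-visual-words pipeline described above, using scikit-learn k-means and a linear SVM (descriptor extraction is stubbed out with random data here; the project itself uses SURF descriptors and the Caltech 101 and Flickr-25K images):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Stand-in for local descriptors (e.g. 64-D SURF) from each training image.
    image_descriptors = [rng.normal(size=(rng.integers(50, 150), 64)) for _ in range(40)]
    labels = np.array([0] * 20 + [1] * 20)   # two toy categories

    # 1) Build the visual vocabulary by clustering all descriptors.
    vocab_size = 32
    kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
    kmeans.fit(np.vstack(image_descriptors))

    # 2) Represent each image as a normalized histogram of visual words.
    def bow_histogram(desc):
        words = kmeans.predict(desc)
        hist = np.bincount(words, minlength=vocab_size).astype(float)
        return hist / hist.sum()

    X = np.array([bow_histogram(d) for d in image_descriptors])

    # 3) Train and evaluate an SVM on the histograms.
    clf = SVC(kernel="linear").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))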




SOUMYAJIT SARKAR - Biometric Analysis of Human Ear Recognition Using a Traditional Approach

MS Project Defense (CS)

When & Where:
May 4, 2016
11:00 am
246 Nichols Hall
Committee Members:
Richard Wang, Chair
Jerzy Grzymala-Busse
Bo Luo

Abstract:
Biometric ear authentication has gained enormous popularity in recent years because the ear is unique to each individual, even among identical twins. In this paper, two scale- and rotation-invariant feature detectors, SIFT and SURF, are adopted for recognition and authentication of ear images. An extensive analysis has been made of how these two descriptors work under certain real-life conditions, and a performance measure is given. The proposed technique is evaluated and compared with other approaches on two data sets. An extensive experimental study demonstrates the effectiveness of the proposed strategy. A robust estimation algorithm has been implemented to remove false matches, yielding improved results. Deep learning has become a new way to detect features in objects and is also used extensively for recognition purposes. Sophisticated deep learning techniques such as Convolutional Neural Networks (CNNs) have also been implemented and analyzed. Deep learning models need a lot of data to give good results; unfortunately, publicly available ear datasets are not very large, so the CNN simulations are carried out on other state-of-the-art datasets related to this research to evaluate the model.
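
A brief OpenCV sketch of the descriptor-matching and robust-estimation steps (hypothetical file names; SIFT with a ratio test is shown, and RANSAC via a homography fit is used as the robust estimator, which may differ from the project's exact configuration):

    import cv2
    import numpy as np

    img1 = cv2.imread("ear_probe.png", cv2.IMREAD_GRAYSCALE)    # hypothetical paths
    img2 = cv2.imread("ear_gallery.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching to discard ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # Robust estimation (RANSAC) removes remaining false matches.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inlier matches:", int(inlier_mask.sum()))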





Past Defense Notices


RUXIN XIE - Single-Fiber-Laser-Based Multimodal Coherent Raman System

PhD Dissertation Defense (EE)

When & Where:
April 21, 2016
2:30 pm
250 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Shannon Blunt
Victor Frost
Carey Johnson*

Abstract:
Coherent Raman scattering (CRS) is an appealing technique for spectroscopy and microscopy due to its selectivity and sensitivity. We designed and built a single-fiber-laser-based coherent Raman scattering spectroscopy and microscopy system that automatically maintains frequency synchronization between the pump and Stokes beams. The Stokes frequency shift is generated by soliton self-frequency shift (SSFS) through a photonic crystal fiber. The impact of pulse chirping on the signal power reduction of coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS) has been investigated through theoretical analysis and experiment.

Our multimodal system provides measurement diversity among CARS, SRS, and photothermal imaging, which can be used for comparison and offers complementary information. The distributions of hemoglobin in human red blood cells and of lipids in sliced mouse brain samples have been imaged. The frequency and power dependence of the photothermal signal is characterized.
Based on the polarization dependence of the third-order susceptibility of the material, the polarization-switched SRS method is able to eliminate the nonresonant photothermal signal from the resonant SRS signal. Red blood cells and sliced mouse brain samples were imaged to demonstrate the capability of the proposed technique. The results show that polarization-switched SRS removes most of the photothermal signal.



MAHITHA DODDALA - Properties of Probabilistic Approximations Applied to Incomplete Data

MS Project Defense (CS)

When & Where:
March 25, 2016
11:00 am
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Man Kong
Bo Luo

Abstract:
The main focus of the project is to discuss mining of incomplete data, which is found frequently in real-life records. For this, I considered probabilistic approximations, as they have a direct application to mining incomplete data. I examined the results obtained from experiments conducted on eight real-life data sets taken from the University of California at Irvine Machine Learning Repository. I also investigated the properties of singleton, subset, and concept approximations and the corresponding consistencies. The main objective was to compare global and local approximations and to generalize the consistency definition for incomplete data with two interpretations of missing attribute values: lost values and "do not care" conditions. In addition to this comparison, the most useful approach among singleton, subset, and concept approximations is tested; the conclusion is that the best approach should be selected with the help of tenfold cross validation after applying all three approaches. It is also shown that even though six types of consistency exist, there are only four distinct consistencies of incomplete data, as two pairs of such consistencies are equivalent.



ROHIT YADAV - Automatic Text Summarization of Email Corpus Using Importance of Sentences

MS Project Defense (CS)

When & Where:
March 15, 2016
11:00 am
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract:
With the advent of the Internet, the amount of data added online has been increasing at an enormous rate. Though search engines use information retrieval (IR) techniques to facilitate search requests from users, the results are not always effective, and their relevance to a search query may not be high. The user may have to go through several web pages before getting to the one he or she needs. This problem of information overload can be addressed using automatic text summarization. Summarization is the process of obtaining an abridged version of a document so that the user can gain a quick understanding of it. A new technique to produce a summary of an original text is investigated in this project.
Email threads from the World Wide Web Consortium (W3C) corpus are used in this system. Our system is based on identification and extraction of important sentences from the input document. Apart from common IR features such as term frequency and inverse document frequency, features such as term frequency-inverse document frequency (TF-IDF), subject words, sentence position, and thematic words have also been implemented. The model consists of four stages. The pre-processing stage converts unstructured text (content that cannot readily be classified) into structured form (data that resides in fixed fields within a record or file). In the first stage, each sentence is partitioned into a list of tokens and stop words are removed. The second stage extracts the important key phrases in the text by implementing a new algorithm that ranks the candidate words. The system uses the extracted keywords/key phrases to select the important sentences. Each sentence is ranked based on many features, such as the presence of the keywords/key phrases in it and the relation between the sentence and the title, computed with a similarity measure, among other features. The third stage of the proposed system extracts the sentences with the highest rank. The fourth stage is the filtering stage, where sentences from email threads are ranked according to these features and summaries are generated. This system can be considered a framework for unsupervised learning in the field of text summarization.
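
A toy sketch of the extractive idea (simplified scoring only; the project's full feature set also includes key phrases, sentence position, subject words, and thematic words): score each sentence by an IDF-weighted term score and keep the top-ranked sentences.

    import re
    from collections import Counter
    from math import log

    def summarize(text, n_sentences=2):
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
        tokenized = [re.findall(r"[a-z]+", s.lower()) for s in sentences]
        # Document frequency over sentences, used for an IDF-style weight.
        df = Counter(w for toks in tokenized for w in set(toks))
        n = len(sentences)
        def score(toks):
            tf = Counter(toks)
            return sum(tf[w] * log(n / df[w]) for w in tf) / (len(toks) or 1)
        ranked = sorted(range(n), key=lambda i: score(tokenized[i]), reverse=True)
        keep = sorted(ranked[:n_sentences])          # restore original order
        return " ".join(sentences[i] for i in keep)

    print(summarize("The meeting moved to Friday. Please update the draft agenda. "
                    "Lunch will be provided. The draft agenda needs the budget item."))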



ARJUN MUTHALAGU - Flight Search Application

MS Project Defense (CS)

When & Where:
March 15, 2016
9:00 am
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Andy Gill
Jerzy Grzymala-Busse

Abstract:
The “Flight-search” application is an AngularJS application implemented with a client-side architecture. The application displays flight results from different airline companies based on the input parameters. The application also has custom filtering conditions and custom pagination, which a user can interact with to filter the results and limit the number of results displayed in the browser. The application uses the QPX Express API to pull data for the flight searches.



SATYA KUNDETI - A Comparison of Two Decision Tree Generating Algorithms: C4.5 and CART Based on Numerical Data

MS Project Defense (CS)

When & Where:
February 29, 2016
11:00 am
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Luke Huan
Bo Luo

Abstract:
In data mining, classification of data is a challenging task. One of the most popular techniques for classifying data is decision tree induction. In this project, two decision tree generating algorithms, C4.5 and CART, using their original implementations, are compared on different numerical data sets taken from the University of California Irvine (UCI). The comparative analysis of these two implementations is carried out in terms of accuracy and decision tree complexity. Results from experiments show that there is a statistically insignificant difference (5% level of significance, two-tailed test) between C4.5 and CART in terms of accuracy. On the other hand, decision trees generated by C4.5 and CART show a statistically significant difference in terms of their complexity.
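
Such a two-tailed test at the 5% level can be run on per-dataset accuracy pairs as in the sketch below (illustrative numbers, not the project's results; a Wilcoxon signed-rank test is shown, which may differ from the exact test used in the project):

    from scipy.stats import wilcoxon

    # Hypothetical per-dataset test accuracies for the two algorithms.
    c45_acc  = [0.81, 0.77, 0.90, 0.68, 0.85, 0.73, 0.88, 0.79]
    cart_acc = [0.80, 0.78, 0.88, 0.70, 0.84, 0.75, 0.86, 0.80]

    stat, p = wilcoxon(c45_acc, cart_acc, alternative="two-sided")
    print(f"p-value = {p:.3f}")
    if p < 0.05:
        print("difference is statistically significant at the 5% level")
    else:
        print("difference is statistically insignificant at the 5% level")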



NAGA ANUSHA BOMMIDI - The Comparison of Performance and Complexity of Rule Sets Induced from Incomplete Data

MS Project Defense (CS)

When & Where:
February 12, 2016
3:00 pm
317 Nichols Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Andy Gill
Prasad Kulkarni

Abstract:
The main focus of this project is to identify the best interpretation of missing attribute values in terms of the performance and complexity of rule sets. This report summarizes the experimental comparison of the performance and the complexity of rule sets induced from incomplete data sets with three interpretations of missing attribute values: lost values, attribute-concept values, and “do not care” conditions. Furthermore, it details the experiments conducted using the MLEM2 rule induction system on 176 data sets, using three kinds of probabilistic approximations: lower, middle, and upper. The performance was evaluated using the error rate computed by ten-fold cross validation, and the complexity of rule sets was evaluated based on the size of the rule sets and the number of conditions in the rule sets. The results showed that lost values were better in terms of performance in 10 out of 24 combinations. In addition, attribute-concept values were better in 5 out of 24 combinations, and “do not care” conditions were better in 1 combination, in terms of the complexity of rule sets. Furthermore, there was not a single combination of dataset and type of approximation for which both the performance and the complexity of rule sets were better for one interpretation of missing attribute values compared to the other two.



BLAKE BRYANT - Hacking SIEMS to Catch Hackers: Decreasing the Mean Time to Respond to Security Incidents with a Novel Threat Ontology in SIEM Software

MS Thesis Defense (IT)

When & Where:
February 12, 2016
2:00 pm
2012 BEST
Committee Members:
Hossein Saiedian, Chair
Bo Luo
Gary Minden

Abstract:
Information security is plagued with increasingly sophisticated and persistent threats to communication networks. The development of new threat tools or vulnerability exploits often outpaces advancements in network security detection systems. As a result, detection systems often compensate by over reporting partial detections of routine network activity to security analysts for further review. Such alarms seldom contain adequate forensic data for analysts to accurately validate alerts to other stakeholders without lengthy investigations. As a result, security analysts often ignore the vast majority of network security alarms provided by sensors, resulting in security breaches that may have otherwise been prevented.

Security Information and Event Management (SIEM) software has been introduced recently in an effort to enable data correlation across multiple sensors, with the intent of producing a lower number of security alerts with little forensic value and a higher number of security alerts that accurately reflect malicious actions. However, the normalization frameworks found in current SIEM systems do not accurately depict modern threat activities. As a result, recent network security research has introduced the concept of a "kill chain" model designed to represent threat activities based upon patterns of action, known indicators, and methodical intrusion phases. Such a model was hypothesized by many researchers to result in the realization of the desired goals of SIEM software.

The focus of this thesis is the implementation of a "kill chain" framework within SIEM software. A novel kill chain model was developed and implemented within a commercial SIEM system through modifications to the existing SIEM database. These modifications resulted in a new log ontology capable of normalizing security sensor data in accordance with modern threat research. New SIEM correlation rules were developed using the novel log ontology and compared to existing vendor-recommended correlation rules using the default model. The novel log ontology produced promising results indicating improved detection rates, more descriptive security alarms, and a lower number of false positive alarms. These improvements were assessed to provide improved visibility and more efficient investigation processes to security analysts, ultimately reducing the mean time required to detect and escalate security incidents.





SHAUN CHUA - Implementation of a Multichannel Radar Waveform Generator System Controller

MS Project Defense (EE)

When & Where:
February 9, 2016
10:00 am
317 Nichols Hall
Committee Members:
Carl Leuschen, Chair
Chris Allen
Fernando Rodriguez-Morales

Abstract:
Waveform generation is crucial to radar system operation. There is a recent need for an 8-channel transmitter with high-bandwidth chirp signals (100 MHz – 600 MHz). As such, a waveform generator (WFG) hardware module is required for this purpose. The WFG houses four direct digital synthesizers (DDSs) and an Altera Cyclone V FPGA that acts as their controller. The DDS of choice is the AD9915, because its digital-to-analog converter can be clocked at a maximum rate of 2.5 GHz, allowing plenty of room to produce the high-bandwidth, high-frequency chirp signals desired, and also because it supports synchronization between multiple AD9915s.

The brains behind the DDS operations are the FPGA and the radar software developed in NI LabVIEW. These two parts of the digital system grant the WFG highly configurable waveform capabilities. The configurable inputs that can be controlled by the user include the number of waveforms in a playlist, the start and stop frequencies (the bandwidth of the chirp signal), zero-pi mode, and waveform amplitude and phase control.

The FPGA acts as a DDS controller that directly configures and controls the DDS operations, while also managing and synchronizing the operations of all DDS channels. This project largely details the development of such a controller, named the Multichannel Waveform Generator (MWFG) Controller, and the necessary modifications and development in the NI LabVIEW software so that the two complement each other.
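
For context on how a DDS such as the AD9915 is driven, the following sketch computes frequency tuning words for a linear sweep (the 32-bit accumulator width and the fixed-rate update scheme are assumptions based on typical AD99xx parts; the actual MWFG controller logic lives in the FPGA and LabVIEW, not in Python):

    def freq_tuning_word(f_out_hz, f_sysclk_hz=2.5e9, acc_bits=32):
        # Typical DDS relationship: f_out = FTW * f_sysclk / 2**acc_bits.
        return round(f_out_hz * 2**acc_bits / f_sysclk_hz)

    def chirp_ftw_sequence(f_start, f_stop, n_steps):
        # Linear sweep expressed as the list of tuning words a controller
        # would write at a fixed update rate.
        step = (f_stop - f_start) / (n_steps - 1)
        return [freq_tuning_word(f_start + i * step) for i in range(n_steps)]

    ftws = chirp_ftw_sequence(100e6, 600e6, n_steps=5)
    print([hex(w) for w in ftws])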




DEEPIKA KOTA - Automatic Color Detection of Colored Wires In Electric Cables

MS Project Defense (EE)

When & Where:
February 1, 2016
10:30 am
2001B Eaton Hall
Committee Members:
Jim Stiles, Chair
Ron Hui
James Rowland

Abstract:
An automatic color detection system checks the sequence of colored wires in electric cables that are ready to be crimped together. The system inspects flat connectors that differ in the type and number of wires. This is managed automatically by a self-learning system, without requiring the user to manually load new data into the machine. The system is coupled to a connector crimping machine and, once it learns an actual sample of the cable order, it automatically inspects each cable assembled by the machine. The automatic detection is based on three methodologies: 1) a self-learning system; 2) an algorithm for wire segmentation to extract colors from the captured images; and 3) an algorithm for color recognition to cope with wires under different illuminations and insulations. The main advantage of this system is that, when cables are produced in large batches, it provides a high level of accuracy and prevents false negatives in order to guarantee defect-free production.
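
A small OpenCV sketch in the same spirit as the color-recognition step (hypothetical image path and HSV ranges; the project's actual segmentation and self-learning algorithms are more involved): classify a wire crop by checking which learned HSV range covers most of its pixels.

    import cv2
    import numpy as np

    # Learned HSV ranges per wire color (illustrative values only).
    COLOR_RANGES = {
        "red":   ((0, 120, 70),  (10, 255, 255)),
        "green": ((40, 60, 60),  (80, 255, 255)),
        "blue":  ((100, 80, 60), (130, 255, 255)),
    }

    def dominant_color(bgr_patch):
        hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
        counts = {}
        for name, (lo, hi) in COLOR_RANGES.items():
            mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
            counts[name] = int(cv2.countNonZero(mask))
        return max(counts, key=counts.get)

    patch = cv2.imread("wire_patch.png")   # hypothetical crop of a single wire
    print(dominant_color(patch))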



MOHAMMED ZIAUDDIN - Open Source Python Widget Application to Synchronize Bibliographical References Between Two BibTeX Repositories

MS Project Defense (CS)

When & Where:
February 1, 2016
10:00 am
246 Nichols Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni

Abstract:
BibTeX is a tool to edit and manage bibliographical references in a document. Researchers commonly face the problem of having one copy of their bibliographical reference database for a specific project and a master bibliographical database file that holds all of their bibliographical references. Syncing these two files is an arduous task, as one has to search for and modify each reference record individually. Most of the available BibTeX tools either help maintain BibTeX bibliographies in different file formats or search for references in web databases, but none of them provides a way to synchronize the fields of the same reference record across two different BibTeX database files.
The intention of this project is to create an application that helps academicians keep their bibliographical references in two different databases in sync. We have created a Python widget application that employs the Tkinter library for the GUI and the UnQLite database for data storage. The application is integrated with GitHub, allowing users to modify BibTeX files hosted on GitHub.
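
The core synchronization step can be illustrated with a short sketch that works on already-parsed entries (plain dictionaries keyed by citation key; the application's Tkinter, UnQLite, and GitHub layers are omitted here):

    def sync_references(master, project, overwrite=True):
        """Merge fields of entries that share a citation key.

        master and project map citation keys to dicts of BibTeX fields, e.g.
        {"knuth1984": {"title": "Literate Programming", "year": "1984"}}.
        Fields present only in the project entry are copied into the master;
        conflicting fields are overwritten when overwrite=True.
        """
        changed = []
        for key, fields in project.items():
            entry = master.setdefault(key, {})
            for field, value in fields.items():
                if field not in entry or (overwrite and entry[field] != value):
                    entry[field] = value
                    changed.append((key, field))
        return changed

    master = {"knuth1984": {"title": "Literate Programming", "year": "1983"}}
    project = {"knuth1984": {"year": "1984", "journal": "The Computer Journal"}}
    print(sync_references(master, project))  # fields copied/updated in the master
    print(master)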



HARISH ROHINI - Using Intel Pintools to Analyze Memory Access Patterns

MS Project Defense (CS)

When & Where:
January 29, 2016
2:00 pm
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Andy Gill
Heechul Yun

Abstract:
Analysis of large benchmark programs can be very difficult because their memory state changes from run to run, and with billions of instructions the simulation of a whole program can be extremely slow. One solution is to simulate only selected regions that are the most representative parts of a program, so that analysis and optimization can focus on the regions that account for most of the program's execution. To accomplish this, we use Intel's Pin, a binary instrumentation framework that performs program analysis at run time; SimPoint, to find the most representative regions of a program; and PinPlay, for reproducible analysis of the program. This project uses these frameworks to simulate and analyze programs and to provide various statistics about memory allocations, memory reference traces, and allocated memory usage across the most representative regions of the program, as well as cache simulations of those regions.



GOVIND VEDALA - Iterative SSBI Compensation in Optical OFDM Systems and the Impact of SOA Nonlinearities

MS Project Defense (EE)

When & Where:
January 28, 2016
2:00 pm
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Erik Perrins

Abstract:
Multicarrier modulation using Orthogonal Frequency Division Multiplexing (OFDM) is a strong candidate for next-generation long-haul optical transmission systems, offering a high degree of spectral efficiency and easing the compensation of linear impairments such as chromatic dispersion and polarization mode dispersion at the receiver. Optical OFDM comes in two flavors, coherent optical OFDM (CO-OFDM) and direct-detection optical OFDM (DD-OFDM), each having its own share of pros and cons. CO-OFDM is highly robust to fiber impairments and relaxes the bandwidth requirements on electronic components, but requires narrow-linewidth lasers, optical hybrids, and local oscillators. DD-OFDM, on the other hand, has a relaxed laser linewidth requirement and a low-complexity receiver, making it an attractive multicarrier system. However, the DD-OFDM system suffers from signal-signal beat interference (SSBI), caused by mixing among the subcarriers in the photodetector, which degrades system performance. Previously, a guard band between the optical carrier and the data sideband was used to mitigate the effect of SSBI. In this project, we experimentally demonstrate a linearly field-modulated virtual single sideband OFDM (VSSB-OFDM) transmission with direct detection and digitally compensate for the SSBI using an iterative SSBI compensation algorithm.
Semiconductor optical amplifiers (SOAs), with their small footprint, ultra-high gain bandwidth, and ease of integration, are attracting the attention of optical telecommunication engineers for use in high-speed transmission systems as inline amplifiers. However, SOA gain-saturation-induced nonlinearities cause pulse distortion and induce nonlinear crosstalk effects such as cross-gain modulation, especially in wavelength division multiplexed systems. In this project, we also evaluate the performance of iterative SSBI compensation in an optical OFDM system in the presence of these SOA-induced nonlinearities.
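
A highly simplified baseband sketch of the iterative SSBI idea (an assumed carrier-to-signal ratio and a real-valued stand-in for the OFDM waveform; not the experimental DSP chain): the detected photocurrent contains the desired carrier-signal beat term plus the SSBI term |s|^2, and each iteration re-estimates the signal and subtracts the reconstructed SSBI.

    import numpy as np

    rng = np.random.default_rng(1)
    C = 3.0                                   # optical carrier amplitude (assumed CSPR)
    s = rng.normal(scale=0.5, size=4096)      # stand-in for the OFDM waveform

    # Direct-detected photocurrent: carrier-signal beat plus the SSBI term s**2.
    r = C**2 + 2 * C * s + s**2

    s_hat = (r - C**2) / (2 * C)              # initial estimate, SSBI left in
    for i in range(4):
        # Re-estimate the signal after subtracting the reconstructed SSBI.
        s_hat = (r - C**2 - s_hat**2) / (2 * C)
        err = np.sqrt(np.mean((s_hat - s) ** 2))
        print(f"iteration {i + 1}: rms error = {err:.4f}")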



KEERTHI GANTA - TCP Illinois Protocol Implementation in ns-3

MS Project Defense (EE)

When & Where:
January 27, 2016
1:00 pm
250 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Bo Luo

Abstract:
The choice of congestion control algorithm has an impact on the performance of a network. The congestion control algorithm should be selected and implemented based on the network scenario in order to achieve better results. Congestion control in high-speed networks and networks with a large bandwidth-delay product (BDP) is more critical because of the large amount of data at risk. Conventional TCP has problems achieving good throughput in such scenarios. Over the years, conventional TCP has been modified to pave the way for TCP variants that address the issues in high-speed networks. TCP Illinois is one such protocol for high-speed networks. It is a hybrid congestion control algorithm, as it uses both packet loss and delay information to decide on the window size. Packet loss information is used to decide whether to increase or decrease the congestion window, and delay information is used to determine the amount of the increase or decrease.
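
A toy sketch of the loss/delay split described above (the constants and the linear curves are rough stand-ins for the published TCP Illinois design, not the ns-3 implementation): the average queueing delay selects the additive-increase parameter alpha and the multiplicative-decrease parameter beta; ACKs apply alpha and loss events apply beta.

    def illinois_params(avg_delay, max_delay,
                        alpha_min=0.3, alpha_max=10.0, beta_min=0.125, beta_max=0.5):
        # Small delay -> aggressive increase (large alpha), gentle decrease (small beta).
        # Large delay -> cautious increase, strong decrease. Linear curves are used here
        # as an approximation of the protocol's actual concave/convex functions.
        d = min(max(avg_delay / max_delay, 0.0), 1.0)
        alpha = alpha_max - d * (alpha_max - alpha_min)
        beta = beta_min + d * (beta_max - beta_min)
        return alpha, beta

    def on_ack(cwnd, alpha):
        return cwnd + alpha / cwnd        # additive increase per ACK

    def on_loss(cwnd, beta):
        return cwnd * (1.0 - beta)        # multiplicative decrease on packet loss

    cwnd = 10.0
    alpha, beta = illinois_params(avg_delay=0.01, max_delay=0.1)
    cwnd = on_ack(cwnd, alpha)
    print(round(cwnd, 2), "after ACK;", round(on_loss(cwnd, beta), 2), "after a loss")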



ADITYA RAVIKANTI - sheets-db: Database powered by Google Spreadsheets

MS Project Defense (CS)

When & Where:
January 27, 2016
10:00 am
2001B Eaton Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni

Abstract:
The sheets-db library is a Haskell binding to the Google Sheets API. sheets-db allows Haskell users to use Google spreadsheets as a lightweight database. It provides functions to create, read, update, and delete rows in spreadsheets, along with a way to construct simple structured queries.




NIRANJAN PURA VEDAMURTHY - Testing the Accuracy of Erlang Delay Formula for Smaller Number of TCP Flows

MS Project Defense (CoE)

When & Where:
January 27, 2016
8:00 am
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Gary Minden
Glenn Prescott

Abstract:
The Erlang delay formula, used for dimensioning networks, gives the probability of congestion. This project tests the accuracy of the probability of congestion predicted by the Erlang formula against simulated probabilities of packet loss. The simulations apply TCP traffic through a single bottleneck node. Three different source traffic models with small numbers of flows are considered. Simulation results for the three source traffic models are presented in terms of the probability of packet loss and the load supplied to the topology. Various traffic parameters are varied in order to show their impact on the probability of packet loss and to compare against the Erlang prediction of the probability of congestion.
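
For reference, the Erlang delay (Erlang C) formula used for the predicted probability of congestion can be computed as follows, where a is the offered load in Erlangs and c the number of servers (this is the textbook formula, independent of this project's particular simulation setup):

    from math import factorial

    def erlang_c(a, c):
        # Probability that an arriving customer must wait (M/M/c queue),
        # for offered load a (Erlangs) and c servers, requiring a < c.
        top = a**c / (factorial(c) * (1 - a / c))
        bottom = sum(a**k / factorial(k) for k in range(c)) + top
        return top / bottom

    print(erlang_c(2.0, 3))   # ~0.44 probability of delay for this example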



MAHMOOD HAMEED - Nonlinear Mixing in Optical Multicarrier Systems

PhD Dissertation Defense (EE)

When & Where:
January 14, 2016
2:00 pm
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Shannon Blunt
Erik Perrins
Alessandro Salandrino
Carey Johnson*

Abstract:
Efficient use of the vast spectrum offered by fiber-optic links by end users with relatively small bandwidth requirements is possible by partitioning a high-speed signal in a wavelength channel into multiple low-rate subcarriers. Multicarrier systems not only ensure efficient use of optical and electrical components, but also tolerate transmission impairments. The purpose of this research is to experimentally understand and minimize the impact of mixing among subcarriers in Radio-over-Fiber (RoF) and direct-detection systems involving a nonlinear component such as a semiconductor optical amplifier. We also analyze the impact of clipping and quantization on multicarrier signals and compare the electrical bandwidth utilization of two popular multiplexing techniques: orthogonal frequency division multiplexing (OFDM) and Nyquist modulation.
For an OFDM-RoF system, we present a novel technique that minimizes the RF-domain signal-signal beat interference (SSBI), relaxes the phase noise requirement on the RF carrier, realizes the full potential of the optical heterodyne technique, and increases the performance-to-cost ratio of RoF systems. We demonstrate an RoF network that shares the same RF carrier for both downlink and uplink, avoiding the need for an additional RF oscillator in the customer unit.
For direct-detection systems, we first experimentally compare the performance degradations of coherent optical OFDM and single-carrier Nyquist pulse modulated systems in a nonlinear environment. We then experimentally evaluate the performance of the SSBI compensation technique in the presence of semiconductor optical amplifier (SOA) induced nonlinearities for a multicarrier optical system with direct detection. We show that SSBI contamination can be removed from the data signal to a large extent when the optical system operates in the linear region, especially when the carrier-to-signal power ratio is low.



SUSOBHAN DAS - Tunable Nano-photonic Devices

PhD Comprehensive Defense (EE)

When & Where:
January 12, 2016
10:00 am
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Alessandro Salandrino, Co-Chair
Chris Allen
Jim Stiles
Judy Wu*

Abstract:
In nanophotonics, the control of optical signals is based on tuning the optical properties of the material in which the electromagnetic field propagates, and thus the choice of materials and of the physical modulation mechanism plays a crucial role. Several materials investigated here, such as graphene, indium tin oxide (ITO), and vanadium dioxide (VO2), have attracted a great deal of attention in the nanophotonics community because of their remarkable tunability. This dissertation will include both theoretical modeling and experimental characterization of functional electro-optic materials and their applications in guided-wave photonic structures.
We have characterized the complex refractive index of graphene at near-infrared (NIR) wavelengths through reflectivity measurements on a SiO2/Si substrate. The measured complex indices as a function of the applied gate voltage agreed with the prediction of the Kubo formula.
We have performed mathematical modeling of the permittivity of ITO based on the Drude model. Results show that ITO can be used as a plasmonic material and performs better than noble metals for applications in the NIR wavelength region. Additionally, the permittivity of ITO can be tuned by changing the carrier density through an applied voltage. An electro-optic modulator (EOM) based on plasmonically enhanced graphene has been proposed and modeled. We show that tuning the graphene chemical potential through electrical gating can switch the ITO plasmonic resonance on and off. This mechanism enables dramatically increased electro-absorption efficiency.
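
A brief sketch of the Drude-model calculation referred to above (the high-frequency permittivity, effective mass, and damping rate are typical literature values assumed for illustration, not the values used in this work): the complex permittivity follows from the plasma frequency set by the carrier density, which is what gate tuning changes.

    import numpy as np

    # Physical constants (SI units)
    e = 1.602e-19          # elementary charge, C
    eps0 = 8.854e-12       # vacuum permittivity, F/m
    m_e = 9.109e-31        # electron mass, kg
    c = 2.998e8            # speed of light, m/s

    def ito_permittivity(wavelength_m, n_carrier_m3,
                         eps_inf=3.9, m_eff=0.35 * m_e, gamma=1.8e14):
        # Drude model: eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w),
        # with plasma frequency wp^2 = N e^2 / (eps0 * m_eff).
        w = 2 * np.pi * c / wavelength_m
        wp2 = n_carrier_m3 * e**2 / (eps0 * m_eff)
        return eps_inf - wp2 / (w**2 + 1j * gamma * w)

    # A higher carrier density pushes the real part negative (plasmonic) at 1550 nm.
    for N in (5e26, 1e27):                     # carriers per m^3 (illustrative)
        print(N, ito_permittivity(1.55e-6, N))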
Another novel photonic structure we are investigating is a multimode EOM based on the electrically tuned optical absorption of ITO at NIR wavelengths. The capability of mode multiplexing increases the functionality per area of a nanophotonic chip. Proper design of the ITO structure based on the profiles of the y-polarized TE11 and TE21 modes allows both modes to be modulated simultaneously and differentially.
We have experimentally demonstrated the ultrafast changes of optical properties associated with dielectric-to-metal phase transition of VO2. This measurement is based on a fiber-optic pump-probe setup in NIR wavelength. Instantaneous optical phase modulation of the probe was demonstrated during pump pulse leading edge, which could be converted into an intensity modulation of the probe through an optical frequency discriminator