X-Ray Computed Tomography, known as CT, is one of the main tools doctors use today to examine patients. Every major hospital has CT machines working around the clock to provide doctors with in-depth views of the human body; CT scans help them save lives on a daily basis. Today, third-generation CT scanners can scan over 100 patients a day, but their radiation dose is relatively high: an abdominal CT scan, for instance, delivers a dose equivalent to almost 1,000 chest X-ray scans. It is highly desirable to reduce the radiation dose while preserving the resolution and detail of the scans. This is where we come in. In this project the students will first study the basics of computed tomography, advanced mathematical imaging tools, and cutting-edge techniques in digital signal processing. The students will then research, design, and implement a novel reconstruction algorithm for CT scans, which could allow faster scan times and reduced radiation dosage. The algorithm will build on principles from emerging fields such as "Compressed Sensing" and "Super Resolution by Dictionary Learning".
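At the heart of compressed sensing is the idea that a signal which is sparse in some basis can be recovered from far fewer measurements than conventional sampling requires. As a minimal sketch of that idea (not the project's actual reconstruction algorithm), the following recovers a sparse vector from random linear measurements using ISTA, a standard iterative solver for the lasso problem; the matrix, sparsity level, and parameters are all illustrative:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative Shrinkage-Thresholding: approximately solve
    min_x ||A x - y||^2 + lam * ||x||_1 (the lasso problem)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L          # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Toy demo: recover a 3-sparse signal of dimension 100 from 40 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ista(A, y)
```

The sparse support is recovered even though the system is heavily underdetermined, which is the property a CT reconstruction scheme would exploit to reduce the number of projections, and hence the dose.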
For over a century, it was believed that Abbe's diffraction limit in far-field optics, which sets a minimal separation distance between two adjacent objects in an image, could not be circumvented. 2014 stood out as a celebrated year for super-resolution microscopy, in which three Nobel Prize winners proved us wrong and were acknowledged for their contributions to super-resolution optical fluorescence microscopy. Today, super-resolution imaging techniques such as STORM and PALM enable biologists and other researchers to see beyond the diffraction limit and observe intra-cellular entities and dynamics within living cells. Due to several limitations of these techniques, a new technique termed SOFI emerged, which exploits not only spatial information but also temporal information in order to construct super-resolution images of cells.
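As a rough illustration of how SOFI uses temporal information (a toy sketch, not the full method, which also involves cross-correlations and higher orders), second-order SOFI amounts to replacing each pixel's mean intensity with its temporal variance over the acquired frame stack; independently blinking emitters then stand out sharply against static background:

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI sketch: per-pixel temporal variance of the stack.
    stack: (T, H, W) array of frames. Only fluctuating (blinking) emitters
    contribute, so constant background is suppressed and the effective
    point-spread function is squared, sharpening the image."""
    return np.var(stack, axis=0)

# Toy demo: two blinking point emitters on a noisy constant background
rng = np.random.default_rng(1)
T, H, W = 500, 32, 32
stack = 1.0 + 0.1 * rng.standard_normal((T, H, W))   # background + noise
for (r, c) in [(10, 10), (20, 22)]:
    on = rng.random(T) < 0.3                          # stochastic on/off blinking
    stack[:, r, c] += 5.0 * on
img = sofi2(stack)
```

In the variance image the emitter pixels dominate, while the bright-but-static background nearly vanishes, which is the contrast mechanism SOFI builds on.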
Doppler ultrasound is a non-invasive and safe modality used to estimate blood velocities by transmitting high-frequency sound waves (ultrasound) and analyzing the signals reflected from circulating red blood cells. Doppler scans help diagnose many conditions, including heart valve defects, congenital heart disease, artery occlusions, and aneurysms. Classic Doppler processing methods do not exploit the underlying structure of the reflected signals to reduce the sampling rate or improve estimation quality. Therefore, a multitude of ultrasound measurements is needed to produce a reliable velocity estimate for each location and around each time point. In this project the efficient representation of ultrasound Doppler signals will be investigated, with application to sub-Nyquist sampling. Validation will be performed using numerical simulations and phantom scans.
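For context, a standard classic Doppler processing method is the autocorrelation (Kasai) estimator: the phase of the lag-one autocorrelation of the slow-time IQ ensemble gives the mean Doppler shift, which the Doppler equation converts to an axial velocity. A minimal sketch with illustrative parameters (note this operates at the full rate; the project's sub-Nyquist aspect is not implemented here):

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Axial velocity from a slow-time IQ ensemble at one range gate,
    via the Kasai autocorrelation method.
    prf: pulse repetition frequency [Hz], f0: carrier [Hz],
    c: speed of sound in tissue [m/s]."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))    # lag-1 autocorrelation
    fd = np.angle(r1) * prf / (2 * np.pi)      # mean Doppler frequency [Hz]
    return fd * c / (2 * f0)                   # Doppler equation -> velocity

# Toy demo: a synthetic ensemble with a known Doppler shift
prf, f0 = 4e3, 5e6            # illustrative scanner parameters
v_true = 0.3                  # 0.3 m/s axial blood velocity
fd = 2 * v_true * f0 / 1540.0 # corresponding Doppler shift (~1.95 kHz)
n = np.arange(64)
iq = np.exp(2j * np.pi * fd * n / prf)
v_hat = kasai_velocity(iq, prf, f0)
```

In the noiseless case the estimate matches the true velocity exactly; the many-measurement requirement mentioned above arises because real ensembles are noisy and the autocorrelation must be averaged.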
Most traffic lights in Israel operate according to a fixed, predetermined schedule. As vehicle sensing solutions (e.g., traffic cameras and road sensors) have become fairly common, it is now possible to design more intelligent, adaptive policies for traffic lights, with the potential of significantly reducing road congestion. The design of such policies, however, requires solving difficult control problems with large state spaces. This project applies Reinforcement Learning (RL) algorithms to the problem of traffic signal control. We used an open-source traffic simulator (GLD) and compared the Q-learning and SARSA RL algorithms to several hand-designed policies. Function approximation was used to overcome the problem of the large state space. This approach was proposed in a recent paper by Prashanth and Bhatnagar; we built on their results and introduced improvements in the features, dynamics, and performance criteria. We present simulations of a road system based on the Horev junction in Haifa, and show that the RL approach outperforms the heuristic solutions.
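To illustrate the flavor of the RL algorithms compared (the project itself uses function approximation over GLD's large state space, not a lookup table), here is a minimal tabular Q-learning sketch on a hypothetical two-phase junction; the environment, reward, and hyperparameters are toy assumptions:

```python
import random

def q_learning(step, n_states, n_actions, episodes=500, horizon=20,
               alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.
    step(s, a) -> (next_state, reward) is the environment transition."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(horizon):
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r = step(s, a)
            # off-policy TD update toward the greedy value of the next state
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# Hypothetical junction: state = approach with the longer queue,
# action = approach that gets green; serving the queued approach is rewarded.
def step(s, a):
    reward = 1.0 if a == s else -1.0
    return random.randrange(2), reward   # arrivals randomize the busy approach

random.seed(0)
Q = q_learning(step, n_states=2, n_actions=2)
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```

The learned greedy policy gives green to whichever approach is queued, which is exactly the adaptive behavior a fixed schedule cannot provide; SARSA differs only in updating toward the action actually taken rather than the greedy one.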
In the past few years, thanks to the miniaturization of technology, depth-sensing cameras have become a commodity. The release of the Kinect brought to market a depth camera with relatively high resolution and frame rate, which holds enormous potential to advance image research using depth information. That said, there is still a large gap between the resolution and frame-rate capabilities of regular RGB cameras and those of the Kinect. To overcome this weakness, a number of methods [1,2,3,4] have been proposed to improve the Kinect's spatial and temporal resolution using a coupled RGB camera. With regard to spatial resolution, the basic idea is to use the coupled RGB camera to perform a filtering that matches the color of a pixel to the appropriate depth of the object represented by that color. The basic assumptions are that different objects naturally have different colors, and that object boundaries can be sensed with much better precision by the color camera than by the depth camera, which has known difficulties with edge pixels. In this fashion it is possible to enlarge and refine images received from a low-resolution depth camera with the aid of a high-resolution RGB camera. In this project we will implement state-of-the-art techniques to improve the spatial resolution of the Kinect depth camera. A number of different methods will be implemented and their limitations discussed. In addition, we will attempt to overcome the limitations we encounter where possible.
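The color-guided filtering described above is commonly realized as a joint bilateral filter: each high-resolution depth value is a weighted average of nearby coarse depth samples, with weights combining spatial proximity and color similarity in the RGB guide, so depth edges snap to color edges. A minimal, unoptimized sketch (sigmas, window radius, and the toy scene are illustrative assumptions):

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, rgb_hi, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Upsample a low-res depth map guided by a high-res RGB image.
    Cross-color neighbors get near-zero weight, so depth discontinuities
    align with color boundaries rather than with the coarse depth grid."""
    scale = rgb_hi.shape[0] // depth_lo.shape[0]
    H, W = rgb_hi.shape[:2]
    depth_nn = np.kron(depth_lo, np.ones((scale, scale)))  # nearest-neighbour start
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            dy, dx = np.mgrid[i0 - i:i1 - i, j0 - j:j1 - j]
            w_s = np.exp(-(dy**2 + dx**2) / (2 * sigma_s**2))       # spatial weight
            diff = rgb_hi[i0:i1, j0:j1] - rgb_hi[i, j]
            w_r = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma_r**2))  # color weight
            w = w_s * w_r
            out[i, j] = np.sum(w * depth_nn[i0:i1, j0:j1]) / np.sum(w)
    return out

# Toy demo: a 16x16 scene with two color-coded depth regions, 8x8 depth input
rgb = np.zeros((16, 16, 3))
rgb[:, :8] = [1.0, 0.0, 0.0]      # red object at depth 1.0
rgb[:, 8:] = [0.0, 1.0, 0.0]      # green object at depth 2.0
depth_lo = np.ones((8, 8))
depth_lo[:, 4:] = 2.0
depth_hi = joint_bilateral_upsample(depth_lo, rgb)
```

Because the color weight collapses across the red/green boundary, each upsampled pixel averages only depth samples from its own object, keeping the depth edge sharp.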
Modern processors can expose internal performance measurements, such as L1 cache misses or branch mispredictions, to the advanced user. This is done through OS kernel-supported system calls, which configure dedicated registers, called "performance counters", to track different performance measures simultaneously, at resolutions down to single cycles. Programs typically execute in distinct phases: extended intervals with stable performance behavior. Each phase is characterized by its own temporal behavior, and it turns out that each phase corresponds to a different loop in the code. In this project we define and tackle two problems - phase detection and phase prediction. For the former we formulate a statistical tool - hypothesis testing - which determines when a program phase has changed by sampling one or more specific performance counters and computing the appropriate statistical estimators. Once detection has been done, the latter - phase prediction - is performed: the history of past phases is learned and used to predict future phases. To test our results, we both generate synthetic data and obtain real data using the SPEC2006 benchmark suite. The end result is satisfactory detection and prediction of the different phases.
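As an illustration of the detection side (a simplified stand-in for the project's actual estimators), one can flag a phase change by comparing two windows of counter samples with a two-sample z-test on their means; the threshold and the synthetic counter data below are assumptions:

```python
import math
import random
from statistics import mean, stdev

def phase_change(window_a, window_b, z_thresh=3.0):
    """Hypothesis test for a phase change between two sample windows.
    Null hypothesis: both windows come from the same phase (equal means).
    Reject (return True) when the two-sample z statistic exceeds z_thresh."""
    na, nb = len(window_a), len(window_b)
    ma, mb = mean(window_a), mean(window_b)
    va, vb = stdev(window_a) ** 2, stdev(window_b) ** 2
    z = (mb - ma) / math.sqrt(va / na + vb / nb + 1e-12)
    return abs(z) > z_thresh

# Toy demo: the cache-miss rate jumps when the program enters a new loop
random.seed(0)
phase1 = [random.gauss(100, 5) for _ in range(50)]   # low-miss phase
phase2 = [random.gauss(140, 5) for _ in range(50)]   # high-miss phase
```

Sliding such a pair of windows over the counter stream yields change points; the detected phase sequence then feeds the prediction stage.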
This paper describes a real-time prototype system for monitoring eyelid motion. The main components of the system are a tiny magnet placed on the upper eyelid, specially designed hardware for real-time signal acquisition, and dedicated computer software for analysis in both real-time and off-line modes. Eyelid movement is one of the visual behaviors that can reflect a person's medical condition. The full system should allow characterizing various bio-behaviors for future diagnosis.