
Human Activity Detection Using WiFi Signals and Deep Networks (Grandmaster Panel)

 


 

 

SK Reddy:

 

I want to talk about how to make sense out of signals. With artificial intelligence, can we actually develop a model that can make sense of them? What is the sense here? Human activity detection. In other words, what are the human beings doing in the room, and can we use artificial intelligence to predict, guess, or identify it? This work has actually been going on for some time. Before I get into the topic: my name is SK. I have a long name. I work for Digitalist. We are into deep learning, and that's it in terms of marketing.

 

Are These Photos Real?

 

Before we go any further: if you have seen this picture, please don't answer the question. If you haven't, tell me. Each picture has a number. You can't see it: 1, 2, 3, 4, 5, 6. Which of these, one or more, is a fake picture? In other words, not a real human being.

 

Audience Member:

 

1 and 6.

 

SK Reddy:

 

More than 6. 1 and 6. Okay.

 

Audience Member:

 

Two.

 

SK Reddy:

 

Two

 

Audience Member:

 

And 5.

 

SK Reddy:

 

And 5. Okay. Maybe I should ask you which one is not a fake. Which was a real human being? Come on. You're not going to be penalized. Sorry?

 

Audience Member:

 

They all look kind of fake.

 

SK Reddy:

 

They all look fake? So you said it was all fake. You said it was all fake. You said a couple of guys are fake. Did you say it was all fake? Fake picture. Sorry?

 

Audience Member:

 

You're asking if it's for real people, right? They're obviously two dimensional representations.

 

SK Reddy:

 

The pictures of real people.

 

Audience Member:

 

Not Photoshop?

 

SK Reddy:

 

Not Photoshop. All of these pictures are fake. These human beings don't exist. These were made by machines, AI machines. It was actually amazing. Nvidia posted it. There's a link right there; you can go and check the Nvidia site. After 16 hours of training, the AI machines could produce these. The last two years, especially 2016 and 17, have been a golden era for image processing in artificial intelligence,

Whether it's still images or videos, lots and lots of companies have actually developed solutions that will process the videos and figure out what's happening. For example, I read a paper a couple of months ago, published, I think, around 6 months ago by an Israeli university, which will bring a static image to life. I know you might have already seen it on your iPhones and regular phones. But the way this model brings the static image to life is, if you post it on your Facebook profile and your friend looks at the picture, the picture smiles. If your wife, boyfriend, or girlfriend looks at it, maybe it does something naughtier. Funny. If your kids or siblings look at the picture, it actually does something which is more appropriate, whatever is appropriate there. This is not just a couple of wrinkles on the face. This is a complete transformation of a still image, which makes you believe the picture was taken the way it's behaving now. That's one example, of course. There are a lot of other examples in image processing, but there are problems in image processing using AI models.

 

Concerns With AI Models

 

Privacy is a big concern. People don't want their pictures to be taken. That's number one. Number two, line of sight is an issue. If you want, let's say, your image processing model on a surveillance camera or a drone, the camera has to actually have sight of the thing or the picture you want to take. Otherwise, the model does not know what you're doing. Number three, what if it's dark? I know of a company in Berkeley which is processing images of Alzheimer's patients to find out when a patient is slowly sitting on the bed or slowly falling onto the bed. The difference between sitting on a bed and falling onto a bed by an Alzheimer's patient, apparently, only a trained medical doctor or a nurse can figure out. They're working on a model which can figure out when the patient is actually in control or not in control. Then you can bring in a nurse to take care of the patient if this needs attention. All of these solutions, which have actually been working for the last year and a half to two years, are becoming problematic because of privacy and so many other concerns.

Wifi signals being used to track human activity has been going on for some time, especially with custom-made sensors or radars. Even as simple as a Fitbit or an Apple Watch, which actually tracks your number of steps and all of that stuff. This has been done for some time. Using commercial off-the-shelf router wifi signals and predicting human activity is something which has been happening very recently. That's the research I'm going to talk about today. I'm actually going to share a few papers about what has been published. This is so recent. Some of the earliest papers, where people used different types of signals, were done with very rudimentary applications, without AI and with AI. But when I say earliest, that's as early as just 2014 or 2015. 2016 and 17 is when people started doing somewhat stronger research using wifi signals. In late 2017, around 6 months ago, people started using deep networks to process wifi signals.

 

Wifi Signals Use Cases

 

What are some of the use cases people have been using wifi signals for? There are papers available which talk about models being developed using wifi signals for things like indoor stadium temperature and lighting control, or even something as unique as speech: if someone is speaking, wifi signals can be used to understand what they're saying, without even doing an audio recording of it. The lip movement. There is a fantastic paper published which does lip reading, but that's more of an image processing solution. If you guys want to read that, I think you should look for lip reading in image processing using AI. They even have a YouTube video which actually does the demo, where if you mute a television news reader, the model will figure out what the person is saying and actually type the text or do the audio for that.

 

That's image processing. In this case, they've used wifi signals to actually predict what the person is speaking. Of course, the majority of the research in wifi signal processing has been happening in the healthcare sector: trying to find out when the patient is walking, sitting, lying down, or even, to the extent of, breathing. When I read the paper, it was very amazing. I'll talk about some of those papers and take you through the history of what happened in wifi signal research. The research I'm going to talk about today is still happening. I don't think it has been adopted by the industry, especially wifi signals in the context of deep learning models. I don't think it's happening; again, that's my limited awareness. In academic circles, papers are being published using wifi signals. Initially, 3 or 4 years ago, they were using RSS signals (received signal strength), but over the last year and a half to two years, people have been using different types of signals within wifi routers.

 

Also, the trend is going towards how we can use commercially available wifi routers without making any hardware changes, since you already have them in houses, indoors, offices, or stadiums. Also, the recent trend is to use deep learning models. Of course, I will show a couple of papers which have used regular machine learning models, where some sort of machine learning classification model was used, but they're not very effective. I'll talk about some of the problems still not answered in the wifi signal processing field, which is what is stopping or slowing down AI research. I guess in the next couple of years, you will see a tremendous number of papers published. The reason I'm giving you this context is that the research I'm going to show today is not complete. It's not comprehensive yet. I don't have the awareness myself in terms of what is the right solution for some of the problems I'm going to talk about. This topic of using wifi signals to predict human activity is so recent, but at the same time has such great potential. I personally believe in the next two years, you'll find lots more research and a lot more adoption by the industry to take it further and show results.

 

Types of Wifi Signals

 

A couple of years ago, when wifi signal processing was being introduced, lots of academic institutions used RSS signals (received signal strength), which is more of a reflection of the power of the signal. On top of that, people have used a few classification models. The problem in that process is the signal strength is never consistent. It actually gets affected by so many other external factors, including, of course, temperature, humidity, air pressure, and all of that stuff too. More importantly, the signal which the router is emitting is never consistently strong enough. So people actually moved to something else, called channel state information. When you actually have a signal that comes from a router, you have a transmitter and a receiver, if you use a MIMO (that is, multiple input, multiple output) system.

 

If you have multiple antennas on a receiver, each receiver antenna receives the same packet. If it's an OFDM system, you actually have 30 sub carriers. In other words, every antenna is transmitting the same packet using 56 sub carriers, but you actually have access to 30 sub carriers. The same packet is being transmitted by antenna 1 of the transmitter, and received by antenna 1, 2, or 3 of the receiver. The same packet is received. What matters is how you model the received packet versus the transmitted packet. I'll show you a mathematical equation on how to figure out the channel state information of a transmitted packet. The reason there's more research happening in using CSI is that the CSI data comes from closer to the physical layer of your router. Hence, the signal is relatively consistent. At the same time, even though external factors do have an impact, especially on the amplitude, because of the obstruction of these signals by objects in the room or moving human beings, the CSI information is relatively stable and standard enough for people to get the data, process the data, and see if we can make sense out of it.

 

OFDM

 

Audience Member:

 

Yeah. You said something about OFDM?

 

SK Reddy:

 

OFDM. There you go. OFDM. Yeah, that's it. That's the system being used in modern wifi routers to let the packet be sent from multiple antennas on multiple sub carriers, so that the receiver will receive at least a few or all of the packets. So that you don't miss any of the packets.

 

Channel State Information

 

CSI actually is composed of amplitude and phase. When you have a packet that travels from a transmitter to a receiver, it is actually a wave: it has an amplitude, and it also has a phase. The amplitude information does get impacted a little bit because of obstruction, multi-path, or sometimes even fading, because over a period of time, based on the distance, it actually starts fading too. So that gets a little bit of an impact, but not as much. Phase is not that much impacted either. What normally happens is, because of manufacturing defects or the manufacturing situation, there is lots of noise that comes in along with the wave. When you capture the CSI information about a packet at the receiver antenna, it actually has lots of noise too. Let's say the received packet is denoted by yi and the sent or transmitted packet is xi. By the time it arrives at the receiver antenna, it goes through a transformation which is governed by something called Hi, which is nothing else but the channel state information in this case, plus some noise.

 

This is the most fundamental equation I want you guys to take away today: yi = Hixi + ni, where ni is the noise. If you can reduce the noise, then yi is directly related to xi. That is, the received packet is directly related to the sent packet, but influenced by the channel state information. Capturing channel state information has become very easy now, because some of the NICs (network cards) available in some laptops can easily capture the information. I am more of a deep learning guy. I'm more of an AI person, less of a signal processing person. My effort is to see how I can capture the data; once I've got the data, I would rather jump in and start doing the AI models. I will explain some of the papers people have published on processing this data.
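To make the equation concrete, here is a minimal numpy sketch, my own toy illustration rather than anything from the papers: if xi is a known pilot symbol, dividing the received symbol by it recovers an estimate of Hi, corrupted only by the noise term ni.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known transmitted pilot symbol x_i on one sub carrier (complex-valued).
x = 1 + 1j

# True (unknown to the receiver) channel state H_i for that sub carrier.
H_true = 0.8 - 0.3j

# Received symbol: y_i = H_i * x_i + n_i, with a small complex noise term.
n = 0.01 * (rng.standard_normal() + 1j * rng.standard_normal())
y = H_true * x + n

# Since x_i is known, the CSI estimate is simply y_i / x_i.
H_est = y / x

print(abs(H_est - H_true) < 0.05)  # prints True: close to the true channel
```

With the noise term reduced, the estimate converges to the true Hi, which is exactly why denoising matters so much later in the pipeline.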

 

For any deeper signal processing questions, I think I can get back to you later on. We can talk about the use of this data to develop deep models. Towards the end, I think I'll talk about a real deep model. One paper is published, but I'll also talk about my own research, not yet published. I know it's being recorded, but I'll try to be vague enough so that you can't use the information as is. There are a couple of ideas which I'm working on, which use deep models, both LSTMs and CNNs, on CSI data and make sense out of it. You have a question?

 

Variables In The yi=Hixi+ni Equation

 

Audience Member:

 

The n column and the H matrix?

SK Reddy:

 

The numbers H(1) to H(30): these correspond to the sub carriers. What was the question? Actually, maybe we missed a question. Hi is the matrix.

 

Audience Member:

 

Is H, the column?

 

SK Reddy:

 

Hi is the matrix, but the i denotes which sub carrier is coming in. So, let's say, if you have a number of antennas, your H matrix grows accordingly. For each antenna pair, you actually have that much sub-carrier information. This H(1) would be the CSI of that specific packet coming from antenna A to antenna B. For every packet, you actually have a matrix of (number of antenna pairs) times 30 that comes in. Let's say, if you have around a thousand packets, the same packet going from transmitter to receiver, then you have that much data coming in per packet. If the location of your transmitter and the location of your receiver don't change, the information you're getting is consistent enough. Then you can use that data to extrapolate and feed it into your deep learning models to actually start making sense.
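As a sketch of the bookkeeping, assuming the common 3-by-3, 30-subcarrier setup described above (the random values are placeholders for real measurements):

```python
import numpy as np

# Assumed dimensions: 3 TX antennas, 3 RX antennas, 30 sub carriers.
n_tx, n_rx, n_subcarriers = 3, 3, 30
n_packets = 1000

# One complex CSI matrix per packet: (tx antenna, rx antenna, sub carrier).
rng = np.random.default_rng(42)
csi = rng.standard_normal((n_packets, n_tx, n_rx, n_subcarriers)) \
    + 1j * rng.standard_normal((n_packets, n_tx, n_rx, n_subcarriers))

# 9 antenna pairs x 30 sub carriers = 270 complex values per packet.
per_packet = csi[0].size
print(per_packet)  # 270
```

Stacking a thousand packets gives exactly the more-than-2-dimensional tensor mentioned below, ready to be windowed and labeled.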

 

Audience Member:

 

I didn't understand that equation, nor the matrix. So, the S(32,30) in that thing?

 

SK Reddy:

 

S is, I think, the number of antennas. The number of antennas you have. If it's MIMO and you have a 3 by 3 antenna setup, you have 3 transmitters and 3 receivers. The number 30 is the number of subcarriers for each transmitter-receiver pair. Let's say, if you have 3 transmitter antennas and 3 receiver antennas, you actually have every pair. That is, the first antenna of your transmitter will send packets that are received by all 3 antennas of the receiver. If it's a 3 by 3, sorry?

 

Audience Member:

 

There are 9 pairs?

 

SK Reddy:

 

Those are 9 pairs, yeah. And each of the packets that goes in has 30 sub carriers, so you actually get information off the same packet from 30 different sub carriers. The small hi would be the CSI of that packet for that transmitter-receiver pair. If you want the entire CSI for a single packet for the MIMO 3 by 3, then you get a matrix like this. That's for 1 packet. Let's say, if you want to collect lots of packets, then you have a sort of tensor, a more-than-2-dimensional matrix of information for that set of packets. Let's say you have a lab setting with a transmitter and receiver. You want to track the walking of a human being, and you want to develop a model that predicts whether a person is walking or not walking. Just to make it simple, two binary situations: walking or not walking. Then you can let the person walk for X number of minutes and not walk for Y number of minutes, capture the packets transmitted during that duration, and find the CSI value for each packet.

 

You have CSI packets coming in consistently over a period of time. If you break that time into, let's say, a few milliseconds, say a 20 millisecond, 50 millisecond, or 100 millisecond time period, you actually get that much CSI data, a big chunk of the matrix, and now you have data. You have the labels too. That is, you actually have some CSI which is labeled as walking and some CSI labeled as not walking. Then the whole thing gets transformed into a deep learning AI problem. It's no more just a signal processing problem.
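A minimal sketch of that windowing-and-labeling step, with made-up numbers: one amplitude value per packet, 20 ms between packets, and 100 ms windows (a real pipeline would use the full CSI matrices, not a single scalar).

```python
import numpy as np

# Toy stream: one sub carrier amplitude sampled every 20 ms,
# 10 seconds of "walking" followed by 10 seconds of "not walking".
samples_per_second = 50                       # 20 ms sampling period
stream = np.concatenate([
    np.ones(10 * samples_per_second),         # stand-in walking signal
    np.zeros(10 * samples_per_second),        # stand-in idle signal
])
labels = np.concatenate([
    np.ones(10 * samples_per_second, dtype=int),   # 1 = walking
    np.zeros(10 * samples_per_second, dtype=int),  # 0 = not walking
])

# Cut the stream into fixed 100 ms windows (5 samples each).
window = 5
n_windows = len(stream) // window
X = stream[: n_windows * window].reshape(n_windows, window)

# Label each window by the majority label inside it.
y = (labels[: n_windows * window]
     .reshape(n_windows, window)
     .mean(axis=1) > 0.5).astype(int)

print(X.shape, y.shape)  # (200, 5) (200,)
```

The (X, y) pairs are exactly the labeled examples a classifier or deep model trains on.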

 

Data Collected by Wifi Signal Engineers

 

In the first couple of slides, I want to really talk about what type of data was collected by wifi signal engineers using AI models. As I mentioned, earlier RSS signals were being captured, but lately it's more CSI signals. CSI signals are a lot more standard. I'll show you another slide where the phase and the amplitude information is a lot more consistent compared to RSS signals. So right now, that's what is happening. Towards the last slide, I think I'll talk about what is coming in the future. What seems to be stopping or slowing down the research on signals using AI models is the type of data itself. I'll talk about it in the last slide.

 

Audience Member:

 

Question. On the antennas: are you assuming the numbers in this model, or?

 

SK Reddy:

 

In this model? Yes. In this case, 3 by 3 seems to be the most common commercially available, but there may be an unequal number. It's possible, but what I have seen is that all the papers are using the same number of transmitters and receivers.

Audience Member:

 

Meaning at the access point or at the device?

 

SK Reddy:

 

Device at the access point.

 

Audience Member:

 

So the transmitter is the access point, and you're receiving on this device.

 

SK Reddy:

 

On your device, let's say your laptop or a cell phone.

 

Audience Member:

 

You don't have 3 antennas, right?

 

SK Reddy:

 

Well, if you use a laptop, and I have seen a card with them, they actually have 3 antennas. That's what I think. I know.

 

Audience Member:

 

You're assuming a symmetrical system.

 

SK Reddy:

 

Yes, at this time. I have seen a couple of papers where, I think, the number of transmitter antennas and receiver antennas is dissimilar. But the few papers I've read are all talking about the same number of antennas on both sides.

 

From now on, I'll talk about, once we have the data, how to start processing it, how to use machine learning models and deep learning models. I want to emphasize the point that data collection is one of the most difficult problems in this case. I'll give you a couple of examples. First, the objects in the room: even if it's a lab setting, and you want to only detect human walking with nothing else obstructing the path, you still have the walls, the roof, and the floor. The reflected signal is getting impacted because of the reflection, and I think fading is happening too. Even though CSI signals are a lot more stable compared to RSS, CSI still gets affected a lot. That's number 1.

 

Number two, noise is a huge factor. I will show you a couple of examples of how you can denoise your signals, and it is still a huge factor. People are trying to figure out how to do that. Number 3, the time duration of your data snapshot. A person can walk for 5, 10, or 15 minutes, but let's say you want to detect a human being sitting, and typically sitting takes maybe 1.5 to 2 seconds. Now, how do you draw the line and sense when the sitting starts and when the sitting ends? There are data collection problems. Sometimes, if you don't have the clocks of both the transmitter and receiver synchronized, then you have a problem: there's a delay.

 

Even otherwise, it's hard to accurately label the signal with the start and end of an activity. Because an activity takes some time, 1.5 to 2 seconds, if you define a model that says, okay, my data point is this CSI for sitting, it's difficult to find out which CSI signal you're taking. Because sitting happens over a period of 1.5 seconds, you actually have multiple CSI signals coming in during that time period. That is another complication, and maybe it's a good thing too. I'll talk about the deep learning models which can take care of the temporal effects, because an activity can span more than 1 timestamp. That is, t is not just equal to t(0), but t(0), t(1), t(2), t(3), and t(4). Then how do I take all these timestamps as an input and make sense out of it? It can be done only in deep learning, not in regular machine learning. Anyway.

 

Standard Approach

 

Once you collect the data, the standard algorithm or standard approach being taken is: you collect the data, you do a little bit of denoising, and then you do feature extraction. Feature extraction takes place only if you're using machine learning. Let's say, if you're doing some sort of classification model like K-Nearest Neighbors, SVM, or any other model, you do feature extraction. Then you run the machine learning model on that. The advantage of using a deep learning model is you don't actually have to do any feature extraction. The model will figure it out on its own. That's why the last two papers I'm going to talk about discuss deep learning models where there's no feature extraction.

 

Denoising CSI Signals

 

A couple of papers very clearly talked about denoising. Again, if you do research on signal processing, there are lots of techniques mentioned for denoising. There are lots of filters you can pass the signal through to get some noise out. The Butterworth low-pass filter apparently is a lot more popular, but it's not as effective, especially for CSI signals. There are a couple of other filters being used.
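For illustration, a Butterworth low-pass filter in scipy might look like this; the sampling rate, cutoff, and toy signal are my own made-up values, not from any of the papers.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed packet rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)

# A slow 2 Hz "activity" component plus fast 200 Hz noise.
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 200 * t)

# 4th-order Butterworth low-pass with a 20 Hz cutoff;
# filtfilt runs the filter forward and backward to avoid phase shift.
b, a = butter(4, 20.0, btype="low", fs=fs)
denoised = filtfilt(b, a, noisy)

err_before = np.abs(noisy - clean).mean()
err_after = np.abs(denoised - clean).mean()
print(err_before, err_after)
```

The fast noise component sits far above the cutoff, so the filtered trace lands much closer to the clean activity signal.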

 

The most popular denoising technique for signal processing is using PCA. PCA has been used in regular machine learning models too, but it's very effective in signal processing. With principal component analysis, you can identify the individual signals which form part of the combined signal. You can actually delineate the individual components of these signals, and whichever component is the noisy signal can be eliminated. I'll show an example; I'll come to it. So PCA: if you are developing a signal processing model, for deep learning or machine learning, PCA would be one of the first things you would do.
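Here is a toy sketch of PCA-style denoising via SVD. The synthetic data is my own construction: every "subcarrier" shares one slow activity component plus independent noise, and keeping only the leading principal component strips most of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CSI amplitude matrix: 500 time samples x 30 sub carriers.
t = np.linspace(0, 4 * np.pi, 500)
activity = np.sin(t)[:, None]                 # shared slow component
mixing = rng.uniform(0.5, 1.5, (1, 30))       # each sub carrier sees it scaled
X = activity * mixing + 0.3 * rng.standard_normal((500, 30))

# PCA via SVD on the mean-centered matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep only the first principal component; treat the rest as noise.
k = 1
denoised = U[:, :k] * S[:k] @ Vt[:k] + X.mean(axis=0)

# The retained component should explain most of the variance.
explained = S[0] ** 2 / (S ** 2).sum()
print(round(float(explained), 2))
```

Because the activity is common across subcarriers while the noise is independent, the shared motion ends up concentrated in the first component, which is the intuition behind using PCA on CSI streams.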

 

This is an example of the amplitude and phase being sanitized. If you look into these signals here: this is just the amplitude part, and this is the phase information. The accurate phase, phase change, or phase difference is not effectively captured if you don't sanitize; some sort of sanitizing has to happen. The carrier frequency offset and sampling frequency offset are two very technical signal processing terms. I don't have too much detailed information. Carrier frequency offset is a reflection of, sometimes, the way the product has been manufactured. Hence, CFO and SFO are inherent noises in your signal, and you've got to figure out how to process them. If you process them and do a sanitization, you can see the signals very clearly changing the phase or the amplitude to reflect the activity, which is what you're trying to predict.

 

The more distinct the data is, the better the model can figure it out. That's what we are trying to arrive at in this case. If I had to compare this with an image processing model, which we have worked on earlier: you actually have an image where a person is standing or walking, and the pixels in the image are changing. You actually have very distinct pixel values available, so you can tell the model, okay, this is what the pixel intensity is when the person is walking or sitting, and the model can immediately figure it out. In this case, there's so much noise, and hence you have to take the noise out.

 

Audience Member:

 

First question: is the outcome meant to figure out, with CSI, the range or the location?

 

SK Reddy:

 

The location of?

 

Audience Member:

 

The location of activity detection.

 

SK Reddy:

 

Yeah. Okay.

 

Audience Member:

 

Are you trying to figure out how far I am from the access point? Or are you trying to figure out where I am located relative to the access point?

 

LSTM

 

SK Reddy:

 

In the next few slides, I'll be talking about multiple efforts done in localization. In other words, predicting where the person is located. There are a couple of other papers which talk about what the person is doing; human activity detection is a couple of other lines of research too. Localization has been one of the earliest research areas. I think from 2012 there are a couple of papers I'll talk about. I'll come to that paper. But before I get into the deep learning models for signal processing, I want to explain two fundamental building blocks of deep learning. If you guys have heard of LSTM, that'll be nice. If not, I can explain. There is a model called recurrent neural networks.

 

A recurrent neural network is a deep model which takes data in a temporal fashion. That is, if something has happened in a sequence, and if there is an importance assigned to the sequence, this would be a fantastic model which can take the information one piece after the other. There are variations of RNNs available. One variation is called LSTM: long short-term memory. This has been hugely popular for the last 3 or 4 years. If you go to some of my YouTube videos and blogs, I talk about LSTMs being used in text processing for summarizing, question answering, and all of that stuff. There is one fundamental problem in RNNs which is addressed in LSTMs. That's called vanishing gradients or exploding gradients. In other words, if the temporal information you're inputting is very long, then the RNN starts forgetting the information that it heard first and only remembers what you fed in the recent past.

 

Let me give an example, a non-signal-processing example: a text processing example. Say you are developing a model which will do summarization of long text. If you feed the entire paragraph into the RNN, after the 20th or 30th word, the RNN starts forgetting what the first word was, what the second word was. Then you have a model which is forgetting things, and it's not very effective for you. You want the model to remember as much information as possible, because you want to feed all the information for the model to make sense of. From RNNs came LSTMs, where the model actually has some memory in it. According to research, an RNN can remember a maximum of around 10 words or 10 input data points in a temporal fashion.

 

The reason I'm talking about RNNs in signal processing is, you have timestamps of the signal. Especially for human activity detection: if the person is sitting down, and the person takes, let's say, 1.5 seconds to sit down, and your timestamp is, let's say, a hundred milliseconds, you actually have 15 data points for that single activity of a person sitting. All of those 15 data points will be different from each other. Hence, you want to feed all those 15 data points and tell the model, "okay, this is all together one activity called sitting," because it's not a single-instant activity. In that sense, LSTMs need to be used. If you are into signal processing research, especially with deep learning models, take it from me: there's no single paper published yet on signal processing or human activity detection using LSTMs. We are working on it at our company using LSTMs. If you guys want to work on that, that'll be nice.
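To show mechanically what "feeding all 15 data points" means, here is a from-scratch numpy sketch of a single LSTM forward pass over a 15-step window. The weights are random and the dimensions are my own assumptions (30-dim CSI feature vector per timestep, hidden size 8); a real model would be trained with a framework such as PyTorch or Keras.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x_seq, W, U, b, h0, c0):
    """Run a single-layer LSTM over a sequence; return the final hidden state.

    W, U, b hold the stacked input/forget/cell/output gate parameters.
    """
    h, c = h0, c0
    H = h0.shape[0]
    for x in x_seq:                      # one CSI feature vector per timestep
        z = W @ x + U @ h + b            # all four gates at once, shape (4H,)
        i = sigmoid(z[0:H])              # input gate
        f = sigmoid(z[H:2 * H])          # forget gate
        g = np.tanh(z[2 * H:3 * H])      # candidate cell state
        o = sigmoid(z[3 * H:4 * H])      # output gate
        c = f * c + i * g                # memory keeps earlier timesteps alive
        h = o * np.tanh(c)
    return h

# 15 timesteps (1.5 s of sitting at 100 ms windows), 30 features, hidden size 8.
T, D, H = 15, 30, 8
x_seq = rng.standard_normal((T, D))
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

h_final = lstm_forward(x_seq, W, U, b, np.zeros(H), np.zeros(H))
print(h_final.shape)  # (8,)
```

The final hidden state summarizes the whole 1.5-second window, and a small classifier head on top of it would emit the "sitting" vs. "not sitting" label.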

 

CNN

 

I won't tell you. Second is another deep learning concept or model called a CNN. A CNN takes images, and if you look into an image of, let's say, the smallest size of 224 by 224 pixels, each pixel has a number assigned. Using a CNN, you can run a filter on the image and do a lot of processing. I'm not explaining the whole convolution and sampling machinery in a CNN model. CNN is another deep learning model, which has been used in one paper. It's not prevalent, but I think it's going to be used more: if you convert the signal data into some sort of an image, you can feed that image into CNN models for the model to make sense of. Even though the accuracy of human activity detection as of today, published by papers in 2015, 16, and 17, is as high as 90, 94, 96%, many times those accuracies are in a very restricted lab environment.
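One common way to "convert the signal data into some sort of an image" is a spectrogram. A scipy sketch, with a toy signal and made-up rates of my own choosing:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                      # assumed CSI sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)

# Toy 1-D CSI amplitude trace whose dominant frequency changes halfway
# through, mimicking a change in activity.
sig = np.where(t < 1.0,
               np.sin(2 * np.pi * 5 * t),
               np.sin(2 * np.pi * 40 * t))

# The short-time Fourier transform turns the 1-D signal into a 2-D
# time-frequency "image" that a CNN can consume like a picture.
freqs, times, Sxx = spectrogram(sig, fs=fs, nperseg=128)

image = 10 * np.log10(Sxx + 1e-12)   # log-power, like grayscale intensities
print(image.shape[0])                # 65 frequency bins
```

Each column of the resulting array is one time frame and each row one frequency bin, so activity changes show up as spatial patterns for the convolution filters to pick up.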

 

They have not been used in a public, non-restricted environment. I think the limitation of the non-deep-learning approaches is the lack of deep learning signal processing models, and because of that, the results are not very consistent. These are a couple of the earliest papers, from 2012. In both situations, they used RSS signals: the received signal strength was used to make predictions on the distance, this localization. In this case, it's the distance of the person: how far away the person is located from the receiver. That was the prediction being done. They developed some sort of database: if this is the signal strength, then with accuracy of 60-70%, plus or minus X number of meters is the distance the person is located at. These were some of the earliest works; even though the claimed accuracy was very high, they were using RSS. Not very effective.

 

Accuracy and Range

 

Audience Member:

 

What was the accuracy and how far was the range?

 

SK Reddy:

 

I don't recollect the exact accuracy in this one. No, I can't recollect. The accuracy was, I think, very high. They were actually developing a database based on the signals received. In both situations, you see, they used RSS signals. Nowadays people don't use RSS signals because they're not very consistent. That's the reason why I didn't go deeper into them.

 

Room Size

 

Audience Member:

 

Was it a room or was it a big hall?

 

SK Reddy:

 

All of these papers were in a lab setting; the smallest was 3 meters by 6 meters. The biggest was, I think, a 14 meters by 11 meters lab. Smaller than this room, definitely, for sure, in all the experiments so far. There were a couple of papers I did not share in the presentation today. There were a couple of presentations which talked about making a machine learning model to detect the lighting arrangement in an indoor stadium. Of course, the stadium is massive, but there they were approximating lots of signals coming back as reflected signals. I did not mention that in the presentation because the application of that research was not useful in other contexts. Whereas in a small lab environment, I found the repeatability of the activity was higher and also the accuracy was higher.

 

In 2014, this is the first time I think I've seen them using some sort of machine learning model: an SVM (support vector machine) classifier. Except for that difference, all other activities are very similar. They collected the data, and they did some denoising, that is, increasing the quality of the data. They tried to identify anomalies, that is, local outlier factors, because some signals were completely out of the norm when they looked at the average numbers of the signals coming in. They had to remove the outliers, and then they applied the support vector machine classifier. It will give you a decent accuracy of whether the person is located X number of meters away or Y number of meters away, if you have two classes. If you have multiple classes, then you can do that too.
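A sketch of that pipeline, outlier removal followed by an SVM, on made-up 2-D features; the feature values, class meanings, and parameters here are mine for illustration, not the paper's.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy 2-D "signal features" for two classes (e.g. person near vs. far),
# with a few wildly corrupted readings mixed in.
near = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
far = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([near, far])
y = np.array([0] * 100 + [1] * 100)
X[:5] += 50.0                        # corrupt a few samples far out of norm

# Step 1: drop local-outlier-factor anomalies, as in the paper's pipeline.
keep = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == 1
X_clean, y_clean = X[keep], y[keep]

# Step 2: train an SVM classifier on the cleaned features.
clf = SVC(kernel="rbf").fit(X_clean, y_clean)
acc = clf.score(X_clean, y_clean)
print(round(acc, 2))
```

With the out-of-norm readings removed first, the SVM's decision boundary is no longer pulled toward the corrupted points, which mirrors why the paper does outlier removal before classification.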

 

Then you are restricted by the number of classes you have defined using the SVM. That's one of the earliest papers. They actually had good accuracy using this model, but they were focusing only on fall detection. When the person walks, of course, the signal varies in a certain way, but consider sitting down versus falling down: the signals look very similar. In the frequency domain, though, you see much more violent up-and-down movement in the signal when the person is falling. They banked on that property of the signal. In other words, there's not much difference in this model from what I mentioned earlier, except that they used an SVM classifier.
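
The frequency-domain intuition here can be illustrated with a toy signal: a fall adds a brief, violent burst, which shows up as extra high-frequency energy. The signals, sampling rate, and cutoff below are invented for illustration only:

```python
# Toy illustration: a "fall" adds a short high-frequency burst to an
# otherwise slow signal, which is easy to detect in the spectrum.
import numpy as np

fs = 100                                    # assumed packets per second
t = np.arange(0, 4, 1 / fs)
sitting = np.sin(2 * np.pi * 0.5 * t)       # slow, smooth variation
falling = sitting.copy()
falling[300:340] += np.sin(2 * np.pi * 20 * t[300:340])  # violent burst

def high_freq_energy(sig, cutoff_hz=5):
    """Total spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spectrum[freqs > cutoff_hz].sum()

print(high_freq_energy(sitting) < high_freq_energy(falling))  # True
```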

 

E-Eyes 2015 Paper

 

In 2015, there's another one where they created a set of fingerprints of human activities. They used both RSS signals and CSI signals, but they found CSI was a lot more effective. They were focusing on two different activities: washing dishes and talking on the phone. They created histograms: all the signals were converted into histograms, and they identified which histogram falls into which bin. If washing dishes and talking on the phone are two different bins, how effectively can I delineate the incoming signals and put them into the right bins? That's the model they used, but the reason I wanted to highlight this paper is not the machine learning approach: they did a lot of cleaning up of the data. If you want to learn more techniques for cleaning up data, especially signal denoising, this would be a good paper for you to read.
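
The histogram-fingerprinting idea can be sketched in a few lines: store one normalized amplitude histogram per activity, then assign a new trace to the nearest stored histogram. The bin edges, the synthetic distributions, and the L1 distance below are illustrative assumptions, not the paper's exact choices:

```python
# Toy fingerprinting: one normalized amplitude histogram per activity,
# nearest-histogram classification for new traces.
import numpy as np

BINS = np.linspace(0, 4, 21)  # 20 amplitude bins (assumed range)

def fingerprint(samples):
    """Normalized histogram of amplitude samples."""
    hist, _ = np.histogram(samples, bins=BINS)
    return hist / hist.sum()

rng = np.random.default_rng(1)
profiles = {
    "washing dishes": fingerprint(rng.normal(1.0, 0.3, 5000)),
    "talking on phone": fingerprint(rng.normal(2.5, 0.3, 5000)),
}

def classify(trace):
    fp = fingerprint(trace)
    # L1 distance between histograms; smaller means more similar
    return min(profiles, key=lambda k: np.abs(profiles[k] - fp).sum())

print(classify(rng.normal(1.0, 0.3, 500)))   # washing dishes
```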

 

CARM

 

This is one of the earliest papers, from 2015, and a good approach if you want to track 4 different activities: I think sitting, standing, walking, and falling down. If you look at the waves for walking, falling, and sitting down, look at this portion of how the wave responds, whereas this other portion of the wave seems similar. In other words, the CSI numbers for walking, falling, and sitting down are similar except towards the later part. When the person starts falling, the signal response is almost the same in both situations; halfway through the fall, when the real fall comes, the signal starts responding differently.

 

Amplitude

 

Audience Member:

 

The amplitude is in dB?

 

SK Reddy:

 

The amplitude is in dB? Yeah, dB.

 

Audience Member:

 

Is it relative dB or an absolute measure? Because 20 dB is a big number, right? That's big.

 

SK Reddy:

 

Yeah, I wouldn't know whether it's relative dB or not, but I know it's dB where they're collecting the amplitude: the amplitude changes of the signal over the period of the entire activity. I think it takes almost two seconds for the person to fall, and for that two-second period they're taking the amplitude. Whether relative or not, the values are high.

 

Audience Member:

 

Because 3 dB means we halve the signal, and 20 dB means a factor of 100. So it's a very steep decline. Okay?

 

SK Reddy:

 

Maybe. I'm not an expert in signal processing, so I wouldn't know whether it's relative or absolute. I know it's dB.

 

Detecting Falls

 

Audience Member:

 

This is showing that. How would you know it is a fall? How would you detect that?

 

SK Reddy:

 

When you're training the model, you actually have a human being falling, and you collect the CSI data. Once you collect the CSI data, you use a classifier model to identify: "okay, is this new incoming signal similar to walking, falling, or sitting?" You do the comparison. In a typical machine learning workflow, you train the model, and when the training is done, you run inference. Here, when you're training the model, you get the information, convert it to CSI, and do the classification later.

 

FreeSense - 2016

 

In 2016, I think there's another famous paper where a person was walking up and down and they tried to capture the amplitude. This is an example of PCA. If you run principal component analysis on the same signal for 4 different components, you can see this particular component is shaking too violently, which creates more noise. In this paper they took that one PCA component out and added the rest of the components up into a single signal, then used that data. Yes, sir?
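
The PCA step can be sketched with a plain SVD: decompose the subcarrier streams, discard one component, and sum a few of the remaining component scores into a single stream. The data here is synthetic, and note that on this toy data the first component carries the common signal, whereas in the paper the discarded component was the violently noisy one, so treat this purely as a structural illustration:

```python
# Structural sketch: PCA on 30 subcarrier streams via SVD, drop one
# principal component, sum the next few into a single denoised stream.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 4, 400)
common = np.sin(2 * np.pi * 1.5 * t)                 # shared activity signal
X = np.outer(common, np.linspace(0.5, 1.5, 30))      # 400 samples x 30 subcarriers
X += rng.normal(0, 0.05, size=X.shape)               # per-subcarrier noise

Xc = X - X.mean(axis=0)                              # mean-center
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # PCA via SVD
components = U * S                                   # component scores, one column each

denoised = components[:, 1:4].sum(axis=1)            # drop PC 1, sum the next few
print(denoised.shape)                                # (400,)
```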

 

What Transmits and Receives Signal?

 

Audience Member:

 

Can I just ask, what transmits and receives these laptops, phones or?

 

SK Reddy:

 

The receivers are laptops. In many of the experiments, the transmitters are regular, commercially available routers.

In one paper, I remember, they even used cell phones. But in the majority of cases they're using laptops, because laptops come with NIC cards now, and NIC cards can easily capture the CSI signal. At this point in time, I don't think there are any mobile chipsets for a cell phone that actually capture CSI data. I'm not aware of any. Yes, sir?

 

Audience Member:

 

Kind of a related question. Is it possible to use a mesh network, like multiple receivers and transmitters, for more accuracy?

 

SK Reddy:

 

In many of the papers, the research uses a single transmitter and a single receiver; some use a single transmitter with multiple receivers. I think that's what my research does too. If you have different types of signals coming in for the same activity, you get a lot more data, which is actually good for deep learning models, because you're not doing any feature engineering or feature extraction. I think it may make sense. But as I mentioned earlier, until now there is not enough research using deep learning models.

 

The largest data set was 960 packets of data sent to capture one activity; that was human activity classification with k-NN. If you have multiple receivers receiving the same packets at different locations, even if they're not right next to each other, you may get more data. I have not seen any published information on that. We are working on a similar case of a single transmitter with multiple receivers, but we are not finished yet.

 

Audience Member:

 

960 packets per second?

 

SK Reddy:

 

It's 5 GHz; the frequency of the transmitter is 5 GHz. I think it's around 100 packets per second, but 960 packets for that entire activity. Let's say you're walking: you walk for long enough that you get 960 packets of information for the walking. It's 960 packets, but you actually have 9 transmitter-receiver antenna pairs of information coming in, and 30 subcarriers of information for each. For one single packet you get 30 subcarrier values, and you do that 960 times, so you get a lot of information. That's the largest data set I've seen people using, nothing like the typical deep learning models where people use hundreds of thousands or even millions of data points.
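
Spelling out that arithmetic for one recorded activity:

```python
# Data-volume arithmetic for one activity recording, using the counts
# quoted in the talk.
packets = 960        # packets captured over the whole activity
subcarriers = 30     # CSI values per packet per antenna pair
antenna_pairs = 9    # e.g. 3 transmit x 3 receive antennas

csi_values = packets * subcarriers * antenna_pairs
print(csi_values)    # 259200 CSI numbers: tiny by deep learning standards
```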

 

So I've not seen that. Maybe it goes back to his question too: if you have multiple receivers and transmitters, yes, you'll end up getting more data. Also, in many of these research efforts, when they're doing the walking or the sitting, they walk for, let's say, 1 hour, maybe 5 hours, or 10 hours maximum. There's no walking data for, say, 100 hours, because in a small lab setting there won't be much variation in the signal going from 5 hours to 50 hours, unless the room is wider, you have multiple types of people walking in, and hence multiple activities being tracked. The largest number of activities tracked in a paper was only 4: sitting, falling, and I think a couple of other things.

 

Keyboard Detection

 

There was a paper published, with a YouTube video available, on detecting which key on a keyboard you're striking. The model actually figures it out. They had very good accuracy, but the problem is that some keys showed better accuracy than others. If the key is somewhere to the extreme right or extreme left, so that your hand and fingers have to move, the model was a lot more accurate at figuring out which key you were striking. If your hands are right in the middle of the keyboard and you're striking the key right beneath your finger, the model accuracy is very low. It's actually a good paper if you want to read about keyboard detection.

 

Audience Member:

 

So are these the papers that you talked about?

 

SK Reddy:

 

Yes, these are all some of the papers. I can't talk about my own research, but I'll actually share some information about what we are doing too. Yes, that's the link for the paper. Thank you. All my slides have links to the papers I'm getting the information from.

 

Do We Train the Model For Every Setup?

 

Audience Member:

 

Is it true that you have to train the model for every setup, for every deployment?

 

SK Reddy:

 

Yes and no. Right now, the research is for a one-transmitter, one-receiver setting in a small room. You can have multiple types of people walking, but if your classification or detection is only for those 3 or 4 human activities, you can train on so many people for so many hours. Let's say it's sitting: you can actually record sitting a thousand or maybe 5,000 times. All of this research is being shown only to prove the effectiveness of WiFi signals and of AI models making predictions from them; I've not seen or heard of it being used in a non-lab setting. I know the reason why it's not happening. I'll talk in my last slide about some of the problems WiFi signals have compared to, say, your audio or video signals; this is slightly different, and that's why I think the research is not happening yet. Also, I'll talk about a couple of other innovations happening in the deep learning and image processing worlds. I don't know if you've heard of capsule networks; I think they're also going to change the way signal processing happens going forward. Capsule networks were invented in October 2017, 4-5 months ago.

 

DeepFi - 2017

 

Okay. There's one more paper, I think published in 2017, one of the first two papers where real deep learning was used. I want to show you what I was mentioning earlier about why CSI signals are better than RSS. Look at the standard deviation: if you look at a 10% variation threshold, only about 60% of RSS signals have less than 10% variation in standard deviation, whereas almost 90% of CSI signals have less than 10% variation. What does it mean? A lot more of the CSI signal data is stable compared to RSS data. That's one of the reasons why people don't use RSS signals. In this case, interestingly, even I felt unsure why restricted Boltzmann machines (RBMs) were used as the machine learning model.

 

I have not heard of many instances of RBMs used as a deep learning model on numeric or text data, so I was very surprised. I even sent an email to these authors asking, "what was the reason you picked that model and not some other model?" I didn't get a response yet. The entire process of data collection and denoising is very similar, and they used RBMs as the machine learning model. They got a very high accuracy; in this case, I think 95%. All the published papers report about 90, 94, 95% accuracy.
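
For readers who haven't seen RBMs used this way, here is a minimal sketch of the general idea: an RBM learns features from normalized signal amplitudes without labels, and a simple classifier sits on top. This is only in the spirit of the approach described above; the paper's actual architecture and training (it stacks RBMs) differ, and all data, dimensions, and hyperparameters below are invented:

```python
# Hedged sketch: BernoulliRBM as an unsupervised feature learner on
# synthetic "CSI amplitudes" scaled into [0, 1], feeding logistic regression.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(5)
a = np.clip(rng.normal(0.3, 0.1, (80, 90)), 0, 1)   # fake activity class 0
b = np.clip(rng.normal(0.7, 0.1, (80, 90)), 0, 1)   # fake activity class 1
X = np.vstack([a, b])
y = np.array([0] * 80 + [1] * 80)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print(model.score(X, y))   # training accuracy
```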

 

CiFi

 

This is the most recent research paper, and I want to spend a little more time explaining it. This is the first paper of its kind; maybe you'll find more papers coming. Instead of using just the phase or the amplitude of the signal, these authors tried using the change in phase, called the angle of arrival. If you take the same packet received by two antennas right next to each other, whatever phase angle the first antenna receives and the second antenna receives, you compute the phase difference. That angle-of-arrival information is a lot more stable and consistent for the activities they were trying to measure, even compared to the regular phase and amplitude of the CSI. So angle-of-arrival information was captured; that's number one. Number two, imagine a vector where for every packet you have the 30 subcarriers. If you have 3 receiver antennas and you want to compute a phase change or angle of arrival, it'll be between two antennas. Say you have antennas 1, 2, and 3: from the phases captured at antennas 1 and 2, you can deduce the angle of arrival between those two antennas.

 

Similarly between antennas 2 and 3. So for every packet, you have 60 angle-of-arrival data points. If they have 960 packets and you break those into groups of 60, you have a matrix of 60 by 60, and you have 16 such matrices. Want me to repeat that? For every packet, with 3 antennas coming in, the angle of arrival between antennas 1 and 2 gives you 30 numbers, 30 data points. Between antennas 2 and 3, you get another 30. So you have 60 angle-of-arrival values per packet. You make that into one single column of a vector, you have 960 such packets, and you club together groups of 60 packets.

 

For each packet there are 60 data points, and you have 60 such packets, so you have a matrix of 60 by 60, and you have 16 such images, because 60 times 16 is 960. You have a matrix of numbers. This is the beauty of this paper: they're converting the signal into an image, just a picture. The picture in this case is 60 by 60, and you're getting 16 images for the same activity.
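
The packet-to-image construction can be expressed as a single reshape. The angle-of-arrival values below are random placeholders standing in for real measurements:

```python
# 960 packets, each with 60 angle-of-arrival values (30 per adjacent
# antenna pair), grouped into 16 images of 60 x 60.
import numpy as np

rng = np.random.default_rng(3)
aoa = rng.uniform(-90, 90, size=(960, 60))   # 960 packets x 60 AoA values

images = aoa.reshape(16, 60, 60)             # 16 groups of 60 consecutive packets
print(images.shape)                          # (16, 60, 60)
```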

 

If you have a 60-by-60 image and 16 such images, the rest of the process becomes similar to any convolutional neural network model for image processing. You have an image which is 60 by 60 with depth 16. You run 32 filters of 5 by 5 on it, and you get an output of depth 32 and size 56 by 56. Then you do a subsampling layer. You have a convolution layer, a C layer, and an S layer; for every convolution there's a subsampling layer. If you take that pair as one unit, you have 4 such units: a convolution layer, then a subsampling layer, another convolution layer, another subsampling layer, and so on 4 times. After those 4 stages, you connect to a fully connected layer to make a prediction about what the input image represents.
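
The shape arithmetic for the first unit can be checked directly. Only the first unit's sizes are given in the talk (5-by-5 filters on a 60-by-60 input giving 56-by-56); the later units' kernel sizes aren't specified, so they're omitted here:

```python
# Shape bookkeeping for one convolution + subsampling unit: a 5 x 5 filter
# in "valid" mode shrinks 60 to 56, and 2 x 2 subsampling halves it.
def conv_out(size, kernel):   # valid convolution, stride 1
    return size - kernel + 1

def pool_out(size):           # 2 x 2 subsampling
    return size // 2

print(conv_out(60, 5))              # 56: 60 x 60 input, 5 x 5 filters
print(pool_out(conv_out(60, 5)))    # 28 after subsampling
```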

 

If you have read about CNNs for, let's say, classification of lots of cats, a dog, a car, an airplane: this entire signal processing problem has been converted into a CNN model. I am expecting a lot more papers; there's a lot of potential in this type of approach. Also, if you look at the accuracy of the model, you have the cumulative distribution function of CiFi compared to other, earlier models; DeepFi was the paper I just mentioned. The accuracy is almost 80% for an error of around 2.5 meters. In other words, the model can predict with 80% accuracy within an error of 2.5 meters. If you don't mind an error of, say, 4 and a half meters, it can predict where the person is located a hundred percent of the time. There's a lot more research yet to happen to make these models accurate. Whether it's amplitude only, phase only, phase difference, or combined CSI, the research has not settled on a single factor. A lot of people are trying to find a good indicator; like I said, phase difference was one indicator used by one paper.
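
The cumulative-distribution-function comparison mentioned here is just the fraction of test trials whose localization error falls under a given threshold. A minimal sketch with made-up error values:

```python
# Empirical CDF of localization error: fraction of trials at or below a
# threshold. The per-trial errors are hypothetical, not the paper's data.
import numpy as np

def empirical_cdf(errors, threshold):
    """Fraction of trials with localization error <= threshold (meters)."""
    return float(np.mean(np.asarray(errors) <= threshold))

errors = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.2, 4.5]
print(empirical_cdf(errors, 2.5))   # 0.5
print(empirical_cdf(errors, 4.5))   # 1.0
```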

 

Because the signal is still affected by so many environmental factors like temperature, objects, and obstructions, either the models have to be fine-tuned or the incoming data has to be fine-tuned. Among deep learning models, I've not seen people use LSTMs yet; CNNs have been used. Capsule networks, announced in October 2017, have a lot more potential. One benefit of a capsule network is that traditional image processing models always need maybe hundreds of thousands, even millions, of data points; capsule networks need a few dozen in the best case, or maybe a few hundred in the worst case. If you have less data and use the effectiveness of capsule networks, I think you may find better models coming up.

 

Generative Adversarial Networks

 

That's what I'm waiting for. Generative adversarial networks are another approach, where you have one model generate the data you're looking for and feed it into another model, which actually gets better and better. If you are creating a human activity detection model for signal processing, you can have one model that generates more and more accurate CSI data and feed that CSI data to your model, so you don't need as much real data going forward. Reinforcement learning is another hugely popular topic in deep learning. You might have heard of the game of Go, which was learned in a few hours and beat the world record holder. I think similar RL models can be used for signal processing too.

 

Fundamental Problems With CSI Data and Human Activity

 

Two fundamental problems I've heard: even though people are trying to figure out the connection between CSI data and human activity, there is no absolute indication that CSI data is indeed a reflection of human activity. In other words, if you take the lab environment out and put new people in, or you expand the lab to a bigger or different room, there is no indication that the model is going to work. Until now, all the published papers have only been demonstrated in lab environments. And right now the models only handle sitting, walking, and standing. What if you have a room full of people like you, each one doing different things, and you want a model to figure out what each person is doing? Whether a person is breathing or not, someone is checking email on a cell phone, someone is talking, someone is sleeping. It's a lot more complicated. I think this kind of signal processing is at a very early stage of development, and it'll take more than two years before people start seeing models that figure out human activities in a complex environment using signals. Yes, sir?

 

Other Applications

 

Audience Member:

 

What other applications, other than human behavior, have you tried to integrate into mobile phones/WiFi? Something else?

 

SK Reddy:

 

Good question. I think that's the focus. The answer is, I don't know what other activities you can use it for, but there's a lot of interest in figuring out what human beings are doing and using it for their good, without getting into the privacy problems image processing runs into, or the lighting and line-of-sight issues. For example, healthcare scenarios: you want patients tracked, but you don't have that many trained doctors available. Or a childcare situation where the kid is playing on his own, but you have a safety concern. Or in an airport, you want certain untoward incidents to be tracked. The situations where AI models are used with audio, text, and image processing all focus on human beings, because you want to alleviate human problems, I guess. That's where signal processing is going too. Other than that, I cannot think of any other situation where you can use this right now. I think all AI is being used to see how we can make human life better. That's all I had for today. If you have any questions, I'll be happy to answer them.

 

Focus of Research

 

Audience Member:

 

What does your research focus on?

 

SK Reddy:

 

Sorry?

 

We are focusing on collecting this data and using LSTMs. I did not find a single paper using LSTMs, even though in signal processing the activity is always a temporal activity: it always takes some time. Among deep learning models, an LSTM would fit that. Even a CNN is not temporal: you can take an image at any given instant, feed it into a CNN, and make sense out of it. LSTMs are one of the few models, GRUs being another, that can actually process temporal information. To give an example of temporal information with text: "the cat sat on the mat," "the mat sat on the cat," "the cat mat sat on the." Even though all three examples use the same words, the first sentence has one meaning and the second another. That sequence, and the information carried in the sequence, is what an LSTM captures appropriately. I think there is a lot of temporal structure in signal processing that is not being captured. That's what we are focusing on.
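
To make the temporal point concrete, here is a minimal single-cell LSTM step in NumPy: the hidden and cell states carry information from packet to packet, which a per-frame CNN cannot do. The weights are random and the input is synthetic; this is a sketch of the mechanism, not the speaker's model:

```python
# Minimal LSTM cell: the hidden state h and cell state c are threaded
# through the packet sequence, accumulating temporal context.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gates stacked as [input, forget, output, cell]."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c = f * c + i * g          # new cell state: gated memory
    h = o * np.tanh(c)         # new hidden state
    return h, c

rng = np.random.default_rng(4)
n_in, n_hid = 30, 8            # e.g. 30 subcarrier amplitudes per packet
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for packet in rng.normal(0, 1, (960, n_in)):   # one packet per time step
    h, c = lstm_step(packet, h, c, W, U, b)
print(h.shape)   # (8,)
```

In practice the final hidden state (or the whole hidden sequence) would feed a classifier over the activity labels.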

 

Preserving Optimality

 

Audience Member:

 

Maybe this is a little naive. I don't think that there's necessarily a correlation between CSI and human behavior because that's a measure of trying to get information across from one antenna to another. You're just kind of hijacking and taking advantage of it. In your optimal world, if you could place some sort of tweak on the way that the information is actually being sent, but you still preserve optimality within some reasonable window of data transfer to detect human behavior. What would you do?

 

SK Reddy:

 

I don't know. What would I do to get them to change?

 

Audience Member:

 

To change the output signal to still be able to transmit data efficiently. But yeah.

 

SK Reddy:

 

Let me give you a long answer. There is a famous quotation in statistics: correlation is not causation. Just because an activity is correlated with another doesn't mean it's causing it. Correlation would still be a good area of study, because if the correlation is very consistent, then you can be quite sure: okay, this is what is causing it. Until now, based on the research, they've been able to find some correlation, but there's not enough research to prove that this correlation holds absolutely in every situation. For example, say the signal moves violently up and down in the last few milliseconds, and the amplitude and phase are X and Y for falling. If that is exactly what happens when a person falls down, and that's what impacts the CSI, then we can be sure there's a model that works for you: even though I did no research on your falling specifically, the model will figure out exactly when patients A and B have fallen, even though it never saw patients A and B.

 

The problem is that, in signal processing engineering, people have not figured out that correlation. Now, is your question what I would do to find the optimum correlation?

Audience Member:

 

No, my question was: what would you do to change the WiFi signal behavior itself to make it, I don't know, pick up on human activity better?

 

SK Reddy:

 

Good question. I don't know the answer; I don't think I know what I'd do. If I look at my AI journey, where we try to solve problems using the data we have, whether numeric, text, or image, I think that's how the whole AI world's research goes: I have what I have, the data. Can I find consistent correlations such that I can define a model that says, if I see this, this is what it means, even though that's not exactly what's happening in the background? For text or a language, the data that's coming in is a lot more reliable, because you can actually see the text.

 

If it's an image, you can see the image. With signals, we don't know how much noise is coming in. People have figured out some of the noise, but we don't know what else is in the signal, and they're using it anyway: you have the authentic, real information plus some noise being used to develop a model. Hence, sometimes the accuracy rates vary. What would I do to change the data? I don't know. I don't think anyone has actually answered that. Good question. Maybe I should put it in there.

 

Audience Member:

 

You could put in multiple WiFi signals, one at a different frequency, and then compare; that would tell you how they differ. I don't know if you have read about radar. Their sensors do that: you can use a continuous-wave radar, or you can use a pulsed radar, which tells you the difference in the signal.

 

SK Reddy:

 

Okay. Thank you.

 

Distinguishing Between Objects

 

Audience Member:

 

Is there anything unique about the human body that makes this detection possible? Can you distinguish a person from, say, a large book or other large object falling off a shelf?

 

SK Reddy:

 

Good question. That is one of the unanswered questions in signal processing for the AI world. If a slim person versus a large person is walking, the signals are not similar in some situations; sometimes they are dissimilar. In other words, the signal variation is not consistent enough to show, okay, this is a tall person, this is a short person. For example, let's say a short person is walking versus a tall person sitting down: both signals may look similar, right? Those are some of the examples. What in the human being makes the signal distinct? People have not figured that out, and that's one of the problems.

 

If you figure that out, then you'll be able to say: okay, for walking, this is what impacts the signal; for sitting, this is what impacts it. Right now, all the researchers, AI engineers like me, just collect the data and label it: this is the signal I got when the person stood, this is the signal I got when the person fell down. Then we ask, "is this falling signal consistent every time, and how often? Can I develop a model out of it?" That's been the approach till now.

 

Testing

 

Audience Member:

 

You're not deriving human volume, just receiver location? The frequencies are low. The testing that you've done, or the results you're showing, are on devices, not on human bodies, right?

 

SK Reddy:

 

The testing is of the signal received at the receiver after being reflected by a human body. Let's say you're typing, washing dishes, talking, falling down, or even breathing: the signals are reflections from the human body. Then they look at the CSI values of those reflected signals.

Audience Member:

 

How are they doing that? Because you're using a laptop or something to record. How do you know it's a reflection, a multipath, or a body?

 

SK Reddy:

 

They're averaging. Yes, you're right, that's the problem. They're averaging the CSI values of the signals arriving over multiple paths; multipath is one of the biggest problems for these signals, along with fading. These are some of the issues compared to, as I mentioned earlier, image processing, where you actually get an image and that's exactly what you're looking for, because you can compare the image with the real situation.

 

Wifi Cameras With Video

 

Audience Member:

 

Now, if you have wifi cameras that have high quality video. This problem does not exist. You can just take the video and you know what's going on.

 

SK Reddy:

 

Well, but then it goes back to my original problem. I said video is a privacy issue. Yeah.

 

Audience Member:

 

But in a hospital where the patient cares, you already have the video feed.

 

SK Reddy:

 

Then we're getting into a non-AI discussion, more of an ethical, moral one. And even in a healthcare situation, say it's dark at night while the patient is sleeping: the camera can't help the model then. Line of sight is another issue for the camera, right? If the camera can't see the person, it cannot predict, whereas WiFi signals can.

 

Audience Member:

I think you can combine the camera feed to get a better estimation.

 

SK Reddy:

 

You're right, that's a good point. In lab situations, for labeling purposes, people are actually taking a video. You either have a human being note, okay, what time the person sat down, or you have a video camera watching when the person sat down, figure out the timestamp, and then use the timestamp to extract the signal for that activity. For lab experiments you can do that, but in real situations there are distinct advantages to WiFi signals compared to video. A lot more research has happened in video, so the accuracy and reliability of video is higher, but I think there's a lot more potential for WiFi signals going forward. And we are at such an early stage; some of the problems, why haven't they been solved?

 

We don't know. Having said that, of all the published papers, you never know what did not work for the authors; they publish only what worked. Hence, when we want to do experiments, say I want to run an experiment: if you are a researcher who published a paper, I would not know what did not work for you, so I can't avoid repeating the same problems. That's what happens. Of course, some collaboration can happen and people will figure it out, but the paper only talks about what worked.

 

Audience Member:

 

You said there was a resolution issue, right? When you do image processing, if you have high-quality video, like you said, you can see everything in great detail. But is the resulting image you get from the WiFi high resolution too? It's not high resolution?

 

Comparable to Video

 

SK Reddy:

 

At this point in time, resolution itself is not a major factor in image processing. Okay, I'll show you an example.

 

In 2016, in the image processing world, the average accuracy of an image processing model became better than human beings. Human beings make an error of approximately 5.6 per hundred. In other words, if you're given a hundred pictures of dogs of different shapes, colors, and so on, about 5% of the time you may not be able to predict correctly, but the model makes an error only 3.6% of the time. What does that mean? How can the model predict better than human beings? The answer is that human beings cannot predict accurately if the image is blurred, whereas the model can actually see. Even when the image is so blurred that I cannot see it, the model can see what is in the picture and reconstruct it. Next time you send your pictures, if you don't want other people to see them and you think, "oh, I have blurred it, so I'm sending it," think twice, because the machine can see it. If it falls into the wrong hands, a model can immediately figure out what the pictures are.

 

I have seen image processing experiments with 224-by-224 pixel images and with thousand-by-thousand pixel images, from well under a megapixel up to multi-megapixel sizes. In both situations the model is extremely good at figuring out what's happening. Of course you cannot go as small as 10 by 10 pixels; maybe I've just not seen experiments at that size, but the smallest is 224 by 224. So the pixel size, the size of the image, is not a differentiating factor anymore for the model to be accurate, as long as you're getting the data correctly. In signal processing, even though you convert the signal numbers into images, those numbers are not consistent enough. For example, if you give an image processing model a picture of a dog, and you have a hundred different dogs, all definitely dogs, and you tell the model, "this is a dog," then the model can figure it out.

 

But when you give it signals of a person sitting, say a hundred examples of a person sitting, the signal you're capturing may not be an exact reflection of sitting, because noise is mixed in. That's where the problem is. We don't know how much noise there is. The incoming signal is, say, the human reflection plus reflections from the wall, and you're averaging out all this information, so you're actually introducing noise yourself. That's where the problem is.

 

Audience Member:

 

Appreciate it, thank you.

 

SK Reddy:

 

Do we have any more questions? Thank you guys. Thank you very much. I appreciate it. Thank you.