WEBVTT

1
00:00:02.450 --> 00:00:06.080
Radha Jitendra Rathod: No, no! 2 of my laps backwards.

2
00:02:50.635 --> 00:02:51.745
jsnelso: Hey Angelo!

3
00:02:52.300 --> 00:02:53.315
Angelo Dragone (SLAC): Hey? How's it going.

4
00:02:53.570 --> 00:02:54.609
jsnelso: Hi! How are you?

5
00:02:56.710 --> 00:02:58.300
jsnelso: Kind of tired.

6
00:02:58.300 --> 00:02:59.279
Angelo Dragone (SLAC): Yeah, me, too.

7
00:03:01.690 --> 00:03:03.748
jsnelso: And it was Thursday night.

8
00:03:04.550 --> 00:03:05.005
Angelo Dragone (SLAC): Yeah.

9
00:03:07.750 --> 00:03:08.430
jsnelso: Oh!

10
00:03:09.740 --> 00:03:11.810
Angelo Dragone (SLAC): Lots going on these days.

11
00:03:12.370 --> 00:03:13.380
jsnelso: Sure is

12
00:03:17.090 --> 00:03:20.519
jsnelso: still don't know a lot. Don't know enough, either.

13
00:03:26.210 --> 00:03:28.519
Angelo Dragone (SLAC): Alright, there's people joining.

14
00:03:34.160 --> 00:03:41.520
Angelo Dragone (SLAC): I missed your talk last week. Hopefully there's a recording that I can...

15
00:03:41.937 --> 00:03:45.273
jsnelso: Yeah. There was some good chatter about the

16
00:03:45.690 --> 00:03:46.240
Angelo Dragone (SLAC): 500.

17
00:03:46.240 --> 00:03:50.930
jsnelso: Characterization platforms seem to be of interest, at least for the folks that were on the call. So.

18
00:03:54.480 --> 00:04:00.730
Angelo Dragone (SLAC): Yeah, that certainly could be interesting for our project as well, in particular for one of the thrusts.

19
00:04:01.590 --> 00:04:02.810
Angelo Dragone (SLAC): We'll look at that.

20
00:04:03.813 --> 00:04:04.386
Angelo Dragone (SLAC): Okay.

21
00:04:13.020 --> 00:04:15.039
Angelo Dragone (SLAC): Let's see how many people show up to this

22
00:04:15.500 --> 00:04:19.460
Angelo Dragone (SLAC): meeting, just a couple of days before a big holiday break.

23
00:04:20.293 --> 00:04:26.079
jsnelso: You know, probably people are taking off early tomorrow, I bet.

24
00:04:33.370 --> 00:04:34.070
Angelo Dragone (SLAC): Have

25
00:04:43.690 --> 00:04:46.460
Angelo Dragone (SLAC): I'll probably share my slides a minute ahead of time, so.

26
00:04:46.860 --> 00:04:47.902
jsnelso: Yeah. No worries.

27
00:04:54.600 --> 00:04:56.830
Angelo Dragone (SLAC): You should be seeing something.

28
00:04:57.130 --> 00:04:57.780
jsnelso: Yep.

29
00:05:02.390 --> 00:05:07.369
Angelo Dragone (SLAC): Let's see, and of course I swap.

30
00:05:08.970 --> 00:05:10.359
Angelo Dragone (SLAC): That's probably better.

31
00:05:11.650 --> 00:05:12.499
jsnelso: Oh, there you go!

32
00:06:05.490 --> 00:06:09.449
Angelo Dragone (SLAC): We'll be waiting a couple of minutes.

33
00:06:10.340 --> 00:06:12.030
Angelo Dragone (SLAC): So see people joining.

34
00:06:12.710 --> 00:06:14.759
Angelo Dragone (SLAC): I don't know if Paul is gonna join today.

35
00:06:17.110 --> 00:06:22.389
Angelo Dragone (SLAC): He's been out for a little bit. But he was sending emails to Facebook.

36
00:07:15.870 --> 00:07:18.779
Angelo Dragone (SLAC): Or maybe we should start, since

37
00:07:18.930 --> 00:07:22.110
Angelo Dragone (SLAC): we put together about 30 slides and so on.

38
00:07:24.890 --> 00:07:27.100
Angelo Dragone (SLAC): Don't want to run out of time.

39
00:07:29.730 --> 00:07:31.580
Angelo Dragone (SLAC): How's that? How's that sound?

40
00:07:32.664 --> 00:07:33.130
jsnelso: Good.

41
00:07:33.450 --> 00:07:34.090
Angelo Dragone (SLAC): All right.

42
00:07:34.740 --> 00:08:02.829
Angelo Dragone (SLAC): Okay. So today we're gonna talk about another project within Meercut. It's the day for our ACE, which stands for adaptive, ultrafast, energy-efficient, intelligent sensing technologies. I'm gonna give this talk together with some of the co-PIs on this proposal, in particular from SLAC,

43
00:08:03.383 --> 00:08:10.930
Angelo Dragone (SLAC): and we'll tell you a little bit about this project, first at a high level. Of course, we started very recently.

44
00:08:10.980 --> 00:08:39.949
Angelo Dragone (SLAC): Actually, today, I think, is the start of the second quarter of the activities of this particular project. So there are not many results that we can show, but certainly we can talk about some of the preliminary work that was done and that we plan to expand during the course of this project. As you can see from these slides, there's quite a large collaboration: we have representatives of six

45
00:08:40.430 --> 00:08:50.999
Angelo Dragone (SLAC): national laboratories, plus two universities, Stanford and [unclear]. And I also maybe want to stress, and this will be clear

46
00:08:51.502 --> 00:09:13.629
Angelo Dragone (SLAC): in the next slide, that many of the participants from the laboratories are people working in the detector departments of these facilities, and they are basically working on X-ray detectors for light sources. This will explain why, as an

47
00:09:13.640 --> 00:09:41.511
Angelo Dragone (SLAC): application case, we chose X-ray detectors, but we will try not to be, obviously, very focused on that specific aspect. I also want to mention, and I saw that he joined today, that Scott Evans from Gnova is our industrial advisor, so people will meet him, as he will probably join to represent on this project our

48
00:09:41.940 --> 00:09:51.920
Angelo Dragone (SLAC): Industry Advisory Board, which has not been formed officially yet. But I really want to thank Scott for joining this project.

49
00:09:52.230 --> 00:10:13.129
Angelo Dragone (SLAC): So let me start with something that should not be at all controversial, just to set the stage. As you all know, sensing technologies are really becoming ubiquitous, driving efficiency and enabling innovation in multiple sectors. There is a nice AI-generated picture on this slide that shows some of the

50
00:10:13.130 --> 00:10:35.290
Angelo Dragone (SLAC): sectors in which sensors are actually used quite a bit for monitoring activities. And maybe I should say that these are not single sensors: usually the configuration in which sensors are used is a network of sensors, so information is gathered from multiple sensors and processed collectively.

51
00:10:35.680 --> 00:10:48.049
Angelo Dragone (SLAC): One of the characteristics of this, of course, is that, either for the sheer number of sensors in these networks or for the increased

52
00:10:49.680 --> 00:11:17.699
Angelo Dragone (SLAC): precision that these sensors need to have in specific sectors, the amount of data that is generated is actually quite large, and also generated at speed, because usually these networks are operating at a much higher speed than they used to in the past. This creates a problem that was recognized by the SRC as one of the critical problems in microelectronics

53
00:11:17.700 --> 00:11:32.219
Angelo Dragone (SLAC): a few years ago, which was called the data deluge. It's something that we should try to solve, because, of course, it creates problems in terms of

54
00:11:32.510 --> 00:11:52.895
Angelo Dragone (SLAC): communication bandwidth and problems of memory and storage of this data. And at the same time, we also get an increase in power consumption. These are all aspects that an optimized processing-network architecture should be able to solve. And

55
00:11:53.320 --> 00:12:17.799
Angelo Dragone (SLAC): with this slide I'm gonna show that, besides the sectors that are more, let's say, applicable to society, this problem is also present in experiments for scientific missions. Here on this slide I show you some of the mission areas in which SLAC is involved, and some of the

56
00:12:17.800 --> 00:12:43.290
Angelo Dragone (SLAC): experiments that we have, with the data rates that are involved in the acquisition of signals from these particular applications, and also the total amount of data that needs to be stored, which is quite large. Just to give you something more quantitative that tells you how much data we actually need to store,

57
00:12:43.290 --> 00:12:54.739
Angelo Dragone (SLAC): we can take a couple of examples. For instance, a 16-megapixel, megahertz detector, which is the kind of thing that is planned for LCLS, our photon science facility

58
00:12:54.850 --> 00:13:09.560
Angelo Dragone (SLAC): at SLAC, would produce about 40 TB per second of data, and considering, let's say, the length of the year, that would be equivalent to storing about 1.2 zettabytes of data. And that's only one detector.

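A quick back-of-the-envelope check of the numbers quoted above (a sketch; the per-pixel depth and exact frame rate are assumptions, chosen to reproduce the ~40 TB/s figure, since only the 16-megapixel size and megahertz-class rate are stated):

```python
# Hypothetical parameters: 16-bit samples and a 1.25 MHz frame rate are
# assumptions, not detector specifications.
PIXELS = 16e6            # 16-megapixel detector
BYTES_PER_PIXEL = 2      # assume 16-bit ADC samples
FRAME_RATE = 1.25e6      # assume ~MHz-class frame rate

rate_tb_s = PIXELS * BYTES_PER_PIXEL * FRAME_RATE / 1e12
print(f"{rate_tb_s:.0f} TB/s")        # -> 40 TB/s

SECONDS_PER_YEAR = 3.15e7
volume_zb = rate_tb_s * 1e12 * SECONDS_PER_YEAR / 1e21
print(f"{volume_zb:.2f} ZB/year")     # -> 1.26 ZB, matching the ~1.2 ZB quoted
```

With these assumed parameters the arithmetic lands on the orders of magnitude quoted in the talk.
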
59
00:13:10.063 --> 00:13:39.579
Angelo Dragone (SLAC): If we take as an example instead the future upgrade of the LHC, it will produce about 10 petabytes per second of data, which is similar to the Internet traffic of the entire North America. And if people were to store the data completely, which doesn't happen in this kind of application, that would produce about 200 [unclear] of data per week.

60
00:13:39.700 --> 00:13:55.699
Angelo Dragone (SLAC): In comparison, the entire IoT network nowadays is about 30 million connected devices and generates about 100 [unclear]. So these numbers are huge, and we need to do something. And we can do something, because if we look at the raw data, there is

61
00:13:55.700 --> 00:14:16.609
Angelo Dragone (SLAC): definitely a large mismatch between the data production rate and the information rate. Once we start processing this data and extracting what the interesting information is, that quantity, that volume of data, becomes much, much smaller.

62
00:14:16.700 --> 00:14:20.370
Angelo Dragone (SLAC): So can we do this in real time? That's the biggest question.

63
00:14:20.630 --> 00:14:24.029
Angelo Dragone (SLAC): So on this slide, I

64
00:14:24.410 --> 00:14:40.529
Angelo Dragone (SLAC): try to go to a higher abstraction level. If we look, in a general sense, at networks of sensors, we can divide them into four layers. One is the perception layer, where we basically

65
00:14:40.880 --> 00:14:57.719
Angelo Dragone (SLAC): acquire the data, the acquisition of the signals from the sensors. Then we have a connectivity layer, where we transmit the data through the network. Then we process this data, and then we do the final data analysis, which is application-specific.

66
00:14:57.950 --> 00:15:21.579
Angelo Dragone (SLAC): Usually innovation is in between these layers. If we had to somehow identify the key research areas that would allow us to optimize this sensing, I think we can say that there's a component related to new materials and devices for sensing. There's a component, of course, in the microelectronics architecture, so

67
00:15:21.580 --> 00:15:30.580
Angelo Dragone (SLAC): front-end electronics, and then processing electronics that can be introduced inside the nodes of this architecture

68
00:15:30.580 --> 00:15:40.860
Angelo Dragone (SLAC): to either acquire or process the data. There's innovation in the way we interconnect

69
00:15:41.370 --> 00:15:58.178
Angelo Dragone (SLAC): the various nodes of a network, which is basically, of course, an area of communication, and sometimes that can also relate to packaging, though maybe less important compared to other kinds of applications. And of course, there's the problem of

70
00:15:59.700 --> 00:16:10.970
Angelo Dragone (SLAC): how we process the information within the architecture. And then, of course, having multiple nodes already tells you that, if you want to push the

71
00:16:10.970 --> 00:16:20.950
Angelo Dragone (SLAC): processing towards the sensor, we probably have to distribute the computing across multiple layers within a network of sensors,

72
00:16:20.950 --> 00:16:41.199
Angelo Dragone (SLAC): and that computing is probably also heterogeneous, in the sense that there will be different modalities of computing that can be implemented at each one of these levels. So the way we organize the network is very important, and that in itself constitutes another area in which we can introduce innovation.

73
00:16:41.310 --> 00:16:52.689
Angelo Dragone (SLAC): Finally, of course, we wanna try to reduce the gap between data production and information extraction.

74
00:16:52.690 --> 00:17:15.260
Angelo Dragone (SLAC): And then, of course, there's the problem of increasing the efficiency, in terms of energy consumption, of this network. Within our ACE project, of course, we don't want to study everything, because that's a very broad set of research areas. What we want to do is to try to develop underpinning microelectronics technology, specifically for ultrafast, energy-efficient,

75
00:17:15.260 --> 00:17:31.329
Angelo Dragone (SLAC): dense networks of sensors. This is the case, for instance, for large-area imagers and pretty much all the detectors that we use in facilities and experiments in our laboratories, and to try to

76
00:17:31.340 --> 00:17:49.519
Angelo Dragone (SLAC): mitigate or reduce the extreme data problem that we face nowadays. We want to do this tackling the problem from four different aspects. The first thing that we want to try to do is to reimagine the network of sensors as a

77
00:17:49.560 --> 00:18:12.329
Angelo Dragone (SLAC): data-driven, adaptive, intelligent architecture; I will explain what these terms mean in more detail in the next slides. We want to introduce optimally distributed computing at the data source; we usually call this the edge. Some people have a slightly different meaning for that word, but in our case what we mean is really pushing computation

78
00:18:12.783 --> 00:18:34.570
Angelo Dragone (SLAC): to the sensing nodes, and then to all the layers above within the network architecture. We wanna leverage this; in current architectures for this kind of network, the communication is actually one-way, from the sensing nodes up to the

79
00:18:34.610 --> 00:18:53.820
Angelo Dragone (SLAC): data acquisition system. And of course, once we do this and define these architectures, we want to study a methodology for benchmarking them in terms of energy and information-extraction efficiency. These are the four key elements

80
00:18:54.290 --> 00:19:16.499
Angelo Dragone (SLAC): that will drive our research plan. Of course, we want to start from something that is familiar to us, so we have decided to declare as the application case study for this project X-ray detector systems for ultrafast [unclear] high-energy

81
00:19:17.081 --> 00:19:24.640
Angelo Dragone (SLAC): light sources, as is the case for LCLS or its upgrades, of course.

82
00:19:24.720 --> 00:19:41.159
Angelo Dragone (SLAC): The other element is that we wanna make sure that the architectures we will study are compatible with high energies and also with extreme environments, because that's the case for future experiments at this kind of facility.

83
00:19:41.270 --> 00:20:05.210
Angelo Dragone (SLAC): Now, of course, we want to do this in a way that is also applicable to other fields, which means that the technology we will develop should be, with some adaptation, portable to other fields, for instance fusion or biophysics, but more broadly to other scientific and industrial fields.

84
00:20:06.293 --> 00:20:22.746
Angelo Dragone (SLAC): Okay, so let me expand a little bit more on those particular four aspects, and let me start with this slide, which shows the typical architecture of an imager for scientific applications.

85
00:20:23.560 --> 00:20:47.079
Angelo Dragone (SLAC): Other detectors are maybe different from this, but the components are very similar. Starting from the very top, you can see that there is the focal plane of the imager, and this is usually a tiled focal plane composed of sensors. Underneath each one of them there is a front-end circuit, in particular multiple ASICs for each sensor.

86
00:20:47.080 --> 00:21:12.879
Angelo Dragone (SLAC): What drives the dimension of these tiles is the maximum area at which we can actually build front-end chips nowadays, which is the reticle size of commercial technologies, usually on the order of 2 by 2 cm. So we can take a sensor, connect it to multiple ASICs to create a tile, and then tile this focal plane into a larger area.

87
00:21:13.294 --> 00:21:29.469
Angelo Dragone (SLAC): We try to use the maximum area for these devices because we want to minimize gaps; that's an optimization. So now you can see that we have a first element that processes signals underneath these sensors;

88
00:21:29.470 --> 00:21:46.299
Angelo Dragone (SLAC): then usually there are one or a couple of layers of FPGAs that take data from a subset of tiles and basically aggregate the data from the connected sensors.

89
00:21:46.300 --> 00:22:09.169
Angelo Dragone (SLAC): They format the data and then send it upstream to the data acquisition system. There is no communication between any of these tiles, especially not between any of these ASICs, nor between the FPGAs within each one of the layers above, and so on. The communication flows directly from each one of the sensors down to the acquisition system.

90
00:22:09.540 --> 00:22:31.609
Angelo Dragone (SLAC): So nowadays we don't do pretty much any processing in the front-end ASICs; we just do analog signal conditioning and then a conversion from analog to digital. Then we send that information to the FPGAs, which again don't do any processing, and so on until the data arrives at the data system.

91
00:22:31.800 --> 00:22:53.839
Angelo Dragone (SLAC): There's basically no processing. All the intelligence now is in the data system, which does the calibration of the detector, converts digital data represented on electrical signals into physical quantities, and then performs the various dedicated analysis workflows on each of them.

92
00:22:54.420 --> 00:23:05.929
Angelo Dragone (SLAC): We can do something a little bit more powerful if we start pushing some of the computation into this architecture, and this is something that we have done.

93
00:23:05.950 --> 00:23:14.669
Angelo Dragone (SLAC): What we have done is introduce some static processing capabilities. I wanna stress the word "static" here and try to explain

94
00:23:14.670 --> 00:23:37.000
Angelo Dragone (SLAC): what that means. We can imagine that in the various layers that we have within our architecture, either the front-end ASICs or the FPGA level behind them, or, as I was saying, maybe an additional farm of FPGAs in between what was originally the detector and the data system,

95
00:23:37.430 --> 00:24:05.470
Angelo Dragone (SLAC): we can start doing some processing of the information coming out from the sensors. Here I want to maybe point out that the versatility and the computation that we can do vary depending on the level at which we introduce processing capabilities. If you think about it, the data system, of course, has a

96
00:24:05.530 --> 00:24:33.649
Angelo Dragone (SLAC): view of the entire data coming from the whole focal plane, so this is the place where you can do most of the analysis, and the most complex analysis. As you start going down in your architecture, each element will see a little portion of your focal plane and has no knowledge of what happens in the neighboring sections. That goes down, of course, to the ASICs, which see only a very small portion

97
00:24:33.650 --> 00:24:47.379
Angelo Dragone (SLAC): of your focal plane, and so very little information from the sensors; and between the sensors attached to, let's say, the same ASIC, there's also no communication.

98
00:24:47.500 --> 00:25:12.170
Angelo Dragone (SLAC): So the processing capabilities get reduced as we push them down the chain toward the sensor, and in addition, the processing architectures that we can introduce in these elements become more and more resource-limited, because, of course, the

99
00:25:12.230 --> 00:25:22.520
Angelo Dragone (SLAC): computational power that we can introduce in these elements becomes smaller and smaller as we reach the sensor itself.

100
00:25:23.121 --> 00:25:39.369
Angelo Dragone (SLAC): So the only things that we managed to do in the past are quite simple, for instance putting a threshold on each one of the pixels and saying, okay, is there any information at all? Did you get any signal or not?

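The per-pixel thresholding described here can be sketched in a few lines (an illustrative sketch with made-up numbers, not the actual ASIC logic):

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, size=(64, 64))   # noise-only baseline, sigma = 1
frame[10, 20] = 50.0                          # two injected "real" hits
frame[40, 12] = 35.0

THRESHOLD = 5.0                               # 5-sigma cut (illustrative)
rows, cols = np.nonzero(frame > THRESHOLD)
hits = [(int(r), int(c), float(frame[r, c])) for r, c in zip(rows, cols)]

# Read out only (row, col, value) triplets instead of the full frame:
reduction = frame.size / max(len(hits), 1)
print(f"{len(hits)} hits, reduction factor ~{reduction:.0f}x")
```

For sparse data this is why reading out only above-threshold pixels "gains a lot": 4096 pixels collapse to a handful of hit records.
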
101
00:25:39.370 --> 00:26:02.725
Angelo Dragone (SLAC): That works very well, for instance, for experiments where the data that arrives is sparse in nature, so if you can read out only that piece of information, you gain a lot and can actually go much faster. This is an example of something that

102
00:26:03.250 --> 00:26:16.067
Angelo Dragone (SLAC): we've built here at SLAC. On the other hand, this processing capability is really designed for worst-case scenarios, which means that,

103
00:26:16.740 --> 00:26:28.770
Angelo Dragone (SLAC): for instance, when you talk about specifications, you can tolerate, at the maximum rate of the application, LCLS for instance, which is one megahertz, a maximum occupancy of about 1%.

104
00:26:29.643 --> 00:26:35.182
Angelo Dragone (SLAC): So can we do better than that? Well, the

105
00:26:36.440 --> 00:27:00.890
Angelo Dragone (SLAC): point that I always try to make is that if we move away from this hardware-driven architecture, where everything is sized to handle the worst-case scenario for the data, and we start creating an architecture where the processing capabilities are not static anymore but are dynamic and get adjusted depending on the kind of data, we can do much better. This is basically what we want to study.

106
00:27:01.110 --> 00:27:29.269
Angelo Dragone (SLAC): We can imagine some of the things that we could potentially study during this project, and we have started making a list. Of course, we will go through this list and make sure that it makes sense for the various workflows that we want to analyze, so there's work that needs to happen between the computational functions that we will introduce and the

107
00:27:29.972 --> 00:27:55.099
Angelo Dragone (SLAC): analysis workflows for which they will be utilized. But just to give you some examples: one could imagine that nowadays, when we size our front end, we optimize, for instance, the gain of the very first amplifier for a certain maximum signal, which is

108
00:27:55.140 --> 00:28:17.949
Angelo Dragone (SLAC): what we believe is the worst-case scenario in which the detector will have to operate; but depending on the application, that maximum signal can be much smaller. So we could potentially adjust the gain of the very front end based on the incoming signal, and that will help us optimize the signal. The same thing could be done, for instance, for the acquisition range or for the filtering function.

109
00:28:18.299 --> 00:28:42.089
Angelo Dragone (SLAC): We could control a pixel's power consumption depending on whether it sees signal or not. We could dynamically adjust the compression level that we apply to the data before sending it out, for instance from the very first front-end chip, which, of course, will reduce the strain that we have on the communication coming out

110
00:28:42.090 --> 00:29:09.529
Angelo Dragone (SLAC): from that chip to the first layer of FPGAs, and so on. And of course, as I was saying before, the complexity of these dynamic functions that we can introduce, and some of them are listed here on the slide, will become larger and larger as we move toward the data system, where the element doing the computing sees a larger portion of the focal plane. So versatility goes up as we keep the processing

111
00:29:09.540 --> 00:29:35.950
Angelo Dragone (SLAC): closer to the data system. On the other hand, the reduction rate goes up if we immediately start doing some processing as close as possible to the sensor. As you can imagine, there's probably some optimum that we can find there, so understanding how to distribute the computing within this network and optimize for energy efficiency is one of the key things that we have to do in this project.

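The adaptive-gain idea above can be illustrated with a toy decision rule (hypothetical gain settings and headroom; a real front end would implement this in mixed-signal hardware):

```python
GAINS = [1, 4, 16]       # selectable front-end gain settings (assumed)
FULL_SCALE = 4096        # 12-bit ADC full scale, in counts (assumed)

def choose_gain(recent_peak: float) -> int:
    """Pick the largest gain that keeps the recent peak signal in range."""
    for g in reversed(GAINS):                     # try highest gain first
        if recent_peak * g <= 0.8 * FULL_SCALE:   # keep 20% headroom
            return g
    return GAINS[0]                               # fall back to lowest gain

print(choose_gain(100.0))    # small recent signals -> highest gain (16)
print(choose_gain(1000.0))   # large recent signals -> lowest gain (1)
```

Sizing for the worst case corresponds to always using `GAINS[0]`; adapting to the observed signal recovers the dynamic range a static design gives up.
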
112
00:29:36.100 --> 00:29:59.790
Angelo Dragone (SLAC): So, to try to address the kinds of problems that I introduced in the previous slides, we have defined our project around five thrusts. I'm telling you already that the work within each one of these thrusts is really

113
00:29:59.960 --> 00:30:19.041
Angelo Dragone (SLAC): heavily interconnected with the other thrusts, so really this is a co-design. We've already seen that: for the thrusts that started in the first quarter, which are Thrusts 1 and 2 and Thrust 5, we are already having joint meetings to try to

114
00:30:19.910 --> 00:30:23.569
Angelo Dragone (SLAC): you know, work through the

115
00:30:25.380 --> 00:30:32.859
Angelo Dragone (SLAC): work plan that we have laid out. So I'm gonna describe each one of these thrusts, and for each one of these I'm gonna ask

116
00:30:33.126 --> 00:30:40.593
Angelo Dragone (SLAC): some of the co-PIs to give some information on things that have been done in the past and that we will continue to explore within

117
00:30:40.860 --> 00:31:02.989
Angelo Dragone (SLAC): each one of them. So let's start with the first thrust, which we call dynamic resource-constrained workflows for adaptive experimentation. It tries to deal with the fact that, if we look at the current workflows that have been used for the analysis in this kind of experiment, these usually

118
00:31:03.190 --> 00:31:15.610
Angelo Dragone (SLAC): use models that are quite large, so we certainly cannot take these and try to implement them at the edge. We need to rethink these models in a way that could operate on resource-limited

119
00:31:16.068 --> 00:31:38.990
Angelo Dragone (SLAC): hardware. So we will try to leverage and optimize existing models for light sources, potentially applicable to other fields, but we will try to redesign them in a way that can operate on resource-constrained computing architectures, basically on specifically designed accelerators.

120
00:31:40.036 --> 00:31:59.280
Angelo Dragone (SLAC): And to create adaptivity, this possibility to dynamically adjust the processing functions in our architecture, we will try to create low-latency feedback that can act on the control of the network itself.

121
00:31:59.300 --> 00:32:21.490
Angelo Dragone (SLAC): Now, at the beginning of the project, in this first month, what we have started working on is to survey workflows for different scientific fields, to try to really extract the most common computational elements that will benefit from execution in real time. These are the ones that we will address during the project, at least a subset of them.

122
00:32:21.860 --> 00:32:48.989
Angelo Dragone (SLAC): And to actually make sure we capture all scientific applications broadly, we just conducted a workshop on data flows from intelligent sensors, which was run last Friday. It really started with the idea of doing something very small, inviting a few key speakers that we thought could represent a broad set of workflows. The interest

123
00:32:48.990 --> 00:33:12.730
Angelo Dragone (SLAC): was quite high, so it turned out to become a full-day workshop, with 67 people that attended and that are very excited to continue to participate in this project. So the team is actually growing. This thrust is led by Jana Thayer together with Ryan Coffee, and I'm gonna ask now Jana

124
00:33:12.730 --> 00:33:36.629
Angelo Dragone (SLAC): to talk a little bit, on the next slide, about the data reduction pipeline, which is the system that we are designing here at SLAC for the execution of data flows for LCLS. Jana, do you wanna talk on the next slide?

125
00:33:36.630 --> 00:33:37.290
Jana Thayer: Yes.

126
00:33:37.860 --> 00:33:50.430
Jana Thayer: Okay. So, a little bit about how I learned to stop worrying and love the data deluge. We built this heterogeneous LCLS-II data analysis pipeline,

127
00:33:50.430 --> 00:34:11.489
Jana Thayer: and it's designed to handle the extreme data rate from LCLS-II detectors for a variety of LCLS experiment types. It looks very similar to the sorts of pipelines you might see in HEP, in that there's a layer for triggering, a layer for filtering,

128
00:34:11.550 --> 00:34:41.180
Jana Thayer: and then layers for processing. But one of the key distinguishing characteristics of LCLS is that it is very dynamic: we have different experiments every week, and things change. So we built this pipeline to handle that adaptability, the ability to change things. When we talk about co-design, the detector is just a piece in the chain, so we want to make a more adaptable detector to go with an adaptable data pipeline that stretches from the insides of the detector all the way out to HPC.

129
00:34:42.442 --> 00:34:45.929
Jana Thayer: This pipeline combines

130
00:34:46.159 --> 00:35:05.289
Jana Thayer: ASICs, FPGAs, and GPUs near the detector and in the data reduction pipeline, which is the first computing layer that sees the data. The data then gets written to disk in this fast-feedback layer, and there's additional computing there to do some dedicated data quality monitoring and fast feedback.

131
00:35:05.290 --> 00:35:25.209
Jana Thayer: Then the data gets transferred out to offline, where on-site computing and storage are available as well as off-site, which means that we can leverage facilities like NERSC and the Oak Ridge and Argonne leadership-class facilities to train models, retrain models, and analyze data.

132
00:35:25.790 --> 00:35:38.540
Jana Thayer: So we are basically combining these ASICs for near-detector feature extraction, or data massaging and information extraction, with FPGAs and GPUs,

133
00:35:38.780 --> 00:35:52.949
Jana Thayer: with the features that are available in HPC, to make an end-to-end pipeline that is fully adaptable to all of the changing experimental conditions and the changing experiments at LCLS.

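The layered reduction Jana describes (ASIC near the sensor, FPGA aggregation, then the data reduction pipeline) can be caricatured as a chain of stages, each cheaper and closer to the sensor than the next (stage contents are purely illustrative, not the actual LCLS-II software):

```python
def asic_stage(frame):
    """On-chip zero suppression: keep only above-threshold samples."""
    return [(i, v) for i, v in enumerate(frame) if v > 5]

def fpga_stage(hits):
    """Aggregation/formatting layer: package hits into an event record."""
    return {"n_hits": len(hits), "hits": hits}

def drp_stage(event):
    """Data reduction pipeline: simple feature extraction on the event."""
    event["charge_sum"] = sum(v for _, v in event["hits"])
    return event

raw = [0, 1, 7, 0, 12, 2]                     # toy frame from one sensor
event = drp_stage(fpga_stage(asic_stage(raw)))
print(event)   # {'n_hits': 2, 'hits': [(2, 7), (4, 12)], 'charge_sum': 19}
```

Each stage shrinks or enriches the data before the next, more capable layer sees it, which is the essence of the end-to-end adaptable pipeline.
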
134
00:35:52.950 --> 00:36:11.859
Jana Thayer: So there are some new opportunities here to more rapidly convert data to science by moving some pieces of our algorithms into the detector. As mentioned before, in the detector there is not as much real estate available as there is in HPC. But all of the algorithms and workflows that we run, have developed, and are familiar with

135
00:36:12.090 --> 00:36:17.149
Jana Thayer: are developed in an HPC or large-compute-farm environment

136
00:36:17.260 --> 00:36:38.009
Jana Thayer: on fully calibrated, fully assembled images. So we are not used to thinking about the images as something that can live and breathe and change in response to stimuli; this is a new thing for us. The first steps are just trying to figure out which pieces of our algorithms we can move online to handle the data as it looks when it comes off the sensor,

137
00:36:38.190 --> 00:36:50.569
Jana Thayer: and to try to make these algorithms deployable across ASICs, which are low-power systems, and FPGAs, and provide us with low-latency feedback there.

138
00:36:50.680 --> 00:37:14.780
Jana Thayer: We also want to put complex ML, or some ML, into the edge, and that means taking very large models, which often have several millions to billions of parameters, and squeezing them into an itty-bitty living space inside of an ASIC or FPGA. That itself is a difficult trick, but it can be done using some techniques that we've

139
00:37:14.780 --> 00:37:23.109
Jana Thayer: piloted in LCLS-II. And so we have some methodology to take large models, make them smaller,

140
00:37:23.110 --> 00:37:48.010
Jana Thayer: and then fit them into an edge device, so that will allow us to optimize the pipeline in new ways. What we can then do with this is experiment steering: we can use this to generate feature-extracted data that we can use to tune the experiment, to take only the data we want and none of the data that we don't want. Ultimately that will reduce the time to science, because it means you don't have to process the data more than once: you are

141
00:37:48.010 --> 00:37:57.380
Jana Thayer: producing a data product at the beginning of the pipeline, and you don't have to use additional computing later in the pipeline to produce that data product again.
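
The model-squeezing step mentioned a moment ago (take a large model, make it smaller, fit it into an edge device) can be sketched in a few lines. This is purely illustrative and not the actual LCLS-II methodology; a common recipe combines magnitude pruning with fixed-point quantization:

```python
def compress_model(weights, keep_fraction=0.25, bits=8):
    """Toy 'make the model smaller' step: magnitude pruning (drop the
    smallest weights) followed by uniform quantization to `bits`-bit
    signed integers plus one float scale. Illustrative only."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    # Prune: zero out everything below the magnitude threshold.
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    pruned = [w if abs(w) >= threshold else 0.0 for w in weights]
    # Quantize the survivors to signed `bits`-bit integers.
    scale = max(abs(w) for w in pruned) / (2 ** (bits - 1) - 1)
    quantized = [round(w / scale) for w in pruned]
    return quantized, scale

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002, 0.3, -0.08]
q, scale = compress_model(weights, keep_fraction=0.5)
print(q)  # [127, 0, 56, 0, -99, 0, 42, 0]
```

Only the largest-magnitude half of the weights survive, and each survivor is stored as one 8-bit integer instead of a float, which is the kind of footprint reduction needed to fit inside an ASIC or FPGA.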

142
00:37:57.740 --> 00:38:14.379
Jana Thayer: It also allows us to dynamically adapt to what we're seeing in the detector. So if a new feature appears, or some new experimental condition arises, the detector might be able to adapt to it and react appropriately in the context that we provide it.

143
00:38:14.550 --> 00:38:39.500
Jana Thayer: And once again, we rely on this connection to the local S3DF and remote computing resources. By implementing all of this at the edge, we will mitigate network and storage costs downstream, and we will reduce the amount of computing we need to do later. That frees us up to do more and smarter and more powerful things in the computing later on.

144
00:38:39.500 --> 00:38:57.520
Jana Thayer: But overall, you know, it is really looking at this as a holistic system and a combination of strategies that allow us to handle these extreme data rates, and now pushing some of that adaptability and expandability and scalability deep, deep into the separate layers of the detector itself.

145
00:38:57.870 --> 00:39:01.010
Jana Thayer: So I will end there and pass it to the next.

146
00:39:01.010 --> 00:39:12.400
Angelo Dragone (SLAC): All right, thanks. And of course, as we identify these computational elements that we want to push into our sensing network,

147
00:39:12.400 --> 00:39:36.889
Angelo Dragone (SLAC): we need to understand what is the best computational platform that we can introduce in the various elements of the network. This is what Thrust 2 will study. As you can see here, there are many people from different institutions working on this, and they have worked on computing architectures in the past, in particular people from

148
00:39:36.890 --> 00:39:44.080
Angelo Dragone (SLAC): Livermore, as well as some work we have done at SLAC, and

149
00:39:44.120 --> 00:39:57.440
Angelo Dragone (SLAC): we'll describe now a couple of things as examples of preliminary work that was done in this direction. Then I will ask Nino to talk about

150
00:39:57.440 --> 00:40:22.149
Angelo Dragone (SLAC): one of these architectures that we developed. Then I'll pass it on to auto-calibration in real time and some of the aspects that we're studying here at SLAC. And then I want to ask Sadas Shankar to talk a little bit about his methodology for benchmarking computing architectures,

151
00:40:22.160 --> 00:40:26.240
Angelo Dragone (SLAC): which we hope to extend to edge computing architectures in this project.

152
00:40:26.390 --> 00:40:28.580
Angelo Dragone (SLAC): So we'll start with Nino.

153
00:40:32.180 --> 00:40:39.420
Nino Miceli (APS/ANL): Yeah. So, as a first example, as Angelo has mentioned, in prior work we've

154
00:40:39.600 --> 00:40:56.680
Nino Miceli (APS/ANL): been looking at how to tackle the data deluge problem at the source, right at the front-end ASIC. If you just run some numbers: 1 million frames per second in one ASIC, 200 by 200 pixels, 12 bits; it's almost 500 gigabits per second.
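
That back-of-envelope number can be checked directly, using only the figures quoted in the talk:

```python
# Raw data rate from one front-end ASIC, as quoted above:
# 1 million frames per second, 200 x 200 pixels, 12 bits per pixel.
frames_per_second = 1_000_000
pixels_per_frame = 200 * 200
bits_per_pixel = 12

gigabits_per_second = frames_per_second * pixels_per_frame * bits_per_pixel / 1e9
print(gigabits_per_second)  # 480.0 -- "almost 500 gigabits per second"
```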

155
00:40:57.070 --> 00:40:59.900
Nino Miceli (APS/ANL): That's quite difficult to handle, even

156
00:41:00.170 --> 00:41:26.330
Nino Miceli (APS/ANL): when you're dealing with SFP transceivers and networking. So what we've looked at is: there's still quite a ways for our community's scientific detectors to ride Moore's law. We're now going from 130 nm and 65 nm to 28 nm, so it becomes more feasible to start incorporating digital functionality from the computing domains directly into detector silicon.

157
00:41:27.390 --> 00:41:45.979
Nino Miceli (APS/ANL): As Angelo and Jana mentioned before, at the front end you're very resource-constrained, so you have to be very careful what you design. Here co-design between the detector developers and the domain sciences is very critical to understand

158
00:41:47.530 --> 00:42:11.070
Nino Miceli (APS/ANL): what can be done and where the losses are. So in the SparkPix-RT project we explored one of these digital computing functionalities, in particular compression, on X-ray science data applications. On the left are shown various lossy compression schemes and how their effect manifests in actual reconstructed data.

159
00:42:11.140 --> 00:42:19.370
Nino Miceli (APS/ANL): And this has been work with very tight collaboration and tight feedback between domain scientists, the detector developers, and then

160
00:42:19.580 --> 00:42:26.480
Nino Miceli (APS/ANL): actually implementing that into something in hardware.

161
00:42:27.841 --> 00:42:35.040
Nino Miceli (APS/ANL): Yeah. And the next slide too fast too far.

162
00:42:36.420 --> 00:42:43.769
Nino Miceli (APS/ANL): So here are some details about this project. This is basically the first X-ray detector ASIC that has

163
00:42:44.080 --> 00:42:46.170
Nino Miceli (APS/ANL): on-chip calibration and compression.

164
00:42:46.601 --> 00:43:12.658
Nino Miceli (APS/ANL): As I mentioned, this product is more or less complete, in its final stages. It's basically a collaboration between SLAC and Argonne, combining the pixel matrix with the analog compression, and there's some hardware that we're building and testing at the beamlines very soon. Moving forward, one of the issues is, on the right, you can see that

165
00:43:13.300 --> 00:43:25.459
Nino Miceli (APS/ANL): one of the things that doesn't scale very well with technology is SRAM. SRAM takes a lot of space compared to just the digital logic, so we're trying to come up with algorithms that

166
00:43:25.610 --> 00:43:27.939
Nino Miceli (APS/ANL): minimize the amount of memory.

167
00:43:28.310 --> 00:43:33.300
Nino Miceli (APS/ANL): Another thing: typically, when you think of compression, most compression techniques produce variable-length

168
00:43:33.720 --> 00:43:38.180
Nino Miceli (APS/ANL): output, variable-length data, and that generally becomes very complicated, for

169
00:43:38.380 --> 00:44:07.480
Nino Miceli (APS/ANL): practical reasons and other things. So can we move to a fixed-length compression? And can we think not only about compressing the data, because oftentimes you have to decompress the data downstream, but about actually doing data reduction or feature extraction? And finally, can you bring the detector into the loop by getting feedback from the experiment, to have the detector change dynamically with some analysis that's done downstream?
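
To illustrate why fixed-length output simplifies the readout, here is a toy fixed-length lossy scheme (plain bit truncation). It is not the actual SparkPix compression, just a sketch of the idea:

```python
def truncate(pixel, in_bits=12, out_bits=8):
    """Fixed-length lossy compression by bit truncation: every sample
    emits exactly `out_bits` bits regardless of content, so the output
    rate is constant, unlike variable-length entropy coding."""
    return pixel >> (in_bits - out_bits)

raw = [0, 15, 2048, 4095]                 # 12-bit samples
print([truncate(p) for p in raw])         # [0, 0, 128, 255]
```

Because every pixel maps to the same number of output bits, the downstream link and buffers can be sized exactly, with no worst-case margin for incompressible frames.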

170
00:44:08.080 --> 00:44:08.720
Nino Miceli (APS/ANL): So.

171
00:44:12.600 --> 00:44:22.221
Dionisio Doering (SLAC): Yeah. So, continuing with this collaboration between Argonne and SLAC, we started thinking about how to

172
00:44:22.840 --> 00:44:49.779
Dionisio Doering (SLAC): understand and study systems that are actually adaptable. One of the things that we decided as part of this collaboration was: hey, we need an ASIC that is compatible with things being adapted. So the first thing we decided is that we needed to study and make a prototype of an ASIC that has these high-speed reconfiguration aspects.

173
00:44:49.820 --> 00:45:14.810
Dionisio Doering (SLAC): And this is ongoing. It includes a matrix that is capable of having a sensor bump-bonded to it, with CSAs and ADCs. The output of this ASIC is then sent to some sort of processing unit, which takes the data, reasons about it, and makes these high-speed reconfigurations. The idea here is not to do

174
00:45:14.810 --> 00:45:39.040
Dionisio Doering (SLAC): a frame-by-frame reconfiguration; the idea is that between two frames we would be able to change the state from the previous one to the next, in a way that we don't necessarily lose rate. In parallel with that, we need to understand how we can reason about the incoming data, and we plan on using AI models or engines

175
00:45:39.040 --> 00:46:04.029
Dionisio Doering (SLAC): that would take the live data, process them, and from time to time output a new state of the system, such that we reconfigure the ASIC. This is sort of our framework for an adaptive detector. It would enable us to study different algorithms and different data types, and also how we implement that in the hardware, whether it's a single ASIC.
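
The reconfigure-between-frames idea can be sketched as a simple control loop. Every name and number below is illustrative, not a real detector API:

```python
def process(frame, state):
    """Per-frame readout with the current configuration applied."""
    return [p * state["gain"] for p in frame if p > state["threshold"]]

def control_loop(frames, infer_new_state, every_n=2):
    """Read out every frame; after every `every_n`-th frame, let an
    inference engine emit a new state, applied between frames so no
    frame rate is lost to reconfiguration."""
    state = {"gain": 1, "threshold": 0}
    out = []
    for i, frame in enumerate(frames):
        out.append(process(frame, state))       # normal readout
        if (i + 1) % every_n == 0:              # reconfigure *between* frames
            state = infer_new_state(frame, state)
    return out

def infer_new_state(frame, state):
    """Toy stand-in for the AI engine: raise the threshold if the
    frame looks bright."""
    new = dict(state)
    if sum(frame) / len(frame) > 5:
        new["threshold"] = 3
    return new

print(control_loop([[1, 2, 3], [9, 9, 9], [1, 4, 9]], infer_new_state))
# [[1, 2, 3], [9, 9, 9], [4, 9]] -- the third frame is read out
# with the updated threshold, without skipping any frame.
```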

176
00:46:04.030 --> 00:46:21.600
Dionisio Doering (SLAC): I am highlighting a single ASIC here, but maybe it's a distributed system where many ASICs can talk to each other, many FPGAs can talk to each other, and so on and so forth. And one of the things that we realized very early is that we do not necessarily have these

177
00:46:21.740 --> 00:46:39.790
Dionisio Doering (SLAC): adaptive detectors in hand, which means that we need to figure out a way to simulate them so that we can start studying this without actually having the real hardware. If we go to the next slide, then we can look at

178
00:46:39.790 --> 00:47:04.159
Dionisio Doering (SLAC): I think there was an animation that didn't quite work. We can start from the left side, thinking about the science cases. So with your ideal sample you can actually simulate; and this simulator is work that has been done as part of the collaboration, or actually before the collaboration, work that Jana's group has been doing

179
00:47:04.160 --> 00:47:33.019
Dionisio Doering (SLAC): with many other people, that allows us to simulate single-particle images. Off of that you get an ideal image, and off of this ideal image we can start introducing the adaptations that turn the ideal image into a raw image. That is where we converge from simulation into what I call a raw image. In this example, we take this raw image and apply calibrations, because we just

180
00:47:33.661 --> 00:47:41.489
Dionisio Doering (SLAC): let's say, created an image that is not ideal by adding noise, by adding hot pixels, and so on and so forth.

181
00:47:41.550 --> 00:48:01.510
Dionisio Doering (SLAC): With the calibration, we get to the state where we currently do things: we have a calibrated image that can go be reconstructed. There are a couple more modules here, and then we can compare the ideal image with the real image, evaluate that, and have some sort of feedback.

182
00:48:01.903 --> 00:48:18.430
Dionisio Doering (SLAC): (Not the next slide yet, Angelo. Yes, thank you.) So this means we can go from simulation to raw image, and from simulated raw images back into simulations. But we still don't have a way to change the system. So

183
00:48:18.770 --> 00:48:28.630
Dionisio Doering (SLAC): one way that we are considering right now is to use reinforcement learning: take the raw image, segment it as part of the pre-processing,

184
00:48:28.640 --> 00:48:51.859
Dionisio Doering (SLAC): and understand what areas of the images can be useful for the questions we're posing to the system. As a reward we can use the quality of the image; it could also be the energy that the ASIC is consuming; and we can use different weights for the different terms, to try to create an optimization and then take an action at the detector level.
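
That weighted trade-off between image quality and energy can be written down directly. The weights and scales below are purely illustrative, not values from the project:

```python
def reward(image_quality, energy_joules, w_quality=1.0, w_energy=0.1):
    """Weighted reinforcement-learning reward combining image quality
    (higher is better) and ASIC energy use (lower is better), as
    sketched in the talk. All weights are illustrative."""
    return w_quality * image_quality - w_energy * energy_joules

# With these weights, a configuration with slightly worse quality but
# much lower energy scores higher:
print(reward(0.95, 2.0))   # high quality, high energy
print(reward(0.90, 0.5))   # slightly lower quality, much lower energy
```

Tuning `w_quality` and `w_energy` lets the optimization trade science quality against power, which is the point of weighting the different terms.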

185
00:48:51.860 --> 00:49:07.309
Dionisio Doering (SLAC): This allows us to reason about the different aspects, the different science cases, and the different transformations, in a way that we can anticipate the things that we are going to see once the prototype and future detectors arrive.

186
00:49:08.068 --> 00:49:14.200
Dionisio Doering (SLAC): I show here transformations that are at the detector level itself. So

187
00:49:14.200 --> 00:49:32.140
Dionisio Doering (SLAC): think about a bias that influences the energy; it could be the ADC resolution; it could be many other things. This is part of what Lorenzo and the team on Thrust 3 are studying: what can be transformed. Here we're trying to figure out how we can reason about the data and take actions,

188
00:49:32.590 --> 00:49:50.629
Dionisio Doering (SLAC): but this does not necessarily mean changing the detector itself. If we go to the next slide: we can also use this to change things like the calibration parameters or the compression parameters. So instead of making changes

189
00:49:50.720 --> 00:49:57.549
Dionisio Doering (SLAC): to the detector, we can change the post-processing of it. And then, if we go to the next slide,

190
00:49:57.750 --> 00:50:25.219
Dionisio Doering (SLAC): we can also think about changing the calibration parameters, which was the first example where we said, hey, this might be an easier one to try to solve: can we check if the detector is still calibrated? And if we can figure out whether the detector is still calibrated, can we update the calibration to make sure the detector is always calibrated, so we don't run an experiment on a detector that is not calibrated?
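
A minimal sketch of that check-and-update idea, assuming a simple per-pixel pedestal model; real calibration pipelines are far more involved, and every name and number here is illustrative:

```python
def drifted(dark_frame, pedestal, tolerance=5.0):
    """Check whether the detector is still calibrated by comparing a
    fresh dark frame against the stored per-pixel pedestal."""
    residual = sum(abs(d - p) for d, p in zip(dark_frame, pedestal)) / len(pedestal)
    return residual > tolerance

def update_pedestal(old, dark_frame, alpha=0.1):
    """If drift is detected, nudge the pedestal toward the new dark
    frame, so the detector stays calibrated between experiments."""
    return [(1 - alpha) * p + alpha * d for p, d in zip(old, dark_frame)]

pedestal = [100.0, 100.0, 100.0]
fresh = [112.0, 108.0, 110.0]        # pedestal has drifted upward
if drifted(fresh, pedestal):
    pedestal = update_pedestal(pedestal, fresh)
print(pedestal)                      # nudged toward the fresh dark frame
```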

191
00:50:25.350 --> 00:50:49.840
Dionisio Doering (SLAC): In all these study cases that I showed so far, we started with the single-particle-imaging type of data, but nothing prevents us from going to different types of science cases. Then, if we go to the next slide, this is an example of another simulator, working with another person in Jana's group, and this is work

192
00:50:49.840 --> 00:51:14.289
Dionisio Doering (SLAC): that is heavily based on a Python package that was developed at Argonne. We can do tomography applications and do exactly the same thing. I know there are several materials scientists and other scientists who have simulators and ways of reconstructing samples that could be used within this framework. So this is a way to start making decisions and reasoning

193
00:51:15.250 --> 00:51:36.819
Dionisio Doering (SLAC): in a science-aware way. So we now have a way to actually embed the science case and try to optimize for science, and not necessarily for the calibration or the energy: what can we do to make sure that the science output is considered as part of our evaluation as we change the detector systems?

194
00:51:36.820 --> 00:52:01.469
Dionisio Doering (SLAC): And then, if we go to the last slide that I am presenting: here is an example of an initial study that we are doing of a future detector. We simulated, using this tomography example, a detector that we're going to be delivering next year, which is an ePixHR: 35 kiloframes per second, 4 megapixels. We did the simulations with ideal conditions. We then say, hey,

195
00:52:01.770 --> 00:52:26.749
Dionisio Doering (SLAC): I can position the sinograms in different parts of the detector, and I can see what the quality of the reconstruction is. So we created the proper mask with the proper sizes, and then we moved it: first I started from the ideal one, then I moved to a position at the top-left corner and started scanning, and you can see how the signal-to-noise

196
00:52:26.750 --> 00:52:32.400
Dionisio Doering (SLAC): ratio degrades as we lose information because of the segmentation of the detector. So this is

197
00:52:32.770 --> 00:52:46.910
Dionisio Doering (SLAC): just to show how we are pushing the platform, or the framework, toward enabling us to reason about the science of these upcoming and future detectors. And that's what I had to show.

198
00:52:47.862 --> 00:52:57.030
Angelo Dragone (SLAC): So we'll try to leverage all this work and convert it into our architectures. For the

199
00:52:57.300 --> 00:53:05.719
Angelo Dragone (SLAC): you know, within Thrust 2, as I was saying, we will try to benchmark the architectures for

200
00:53:05.990 --> 00:53:18.449
Angelo Dragone (SLAC): information extraction efficiency and energy efficiency, and I want to give Sadas Shankar the opportunity to talk about his work on benchmarking computing architectures.

201
00:53:18.450 --> 00:53:20.332
Sadas Shankar: Yes, thanks, Angelo.

202
00:53:21.080 --> 00:53:30.350
Sadas Shankar: So this slide shows three figures, and I will try to walk you through what it is we are trying to do.

203
00:53:31.030 --> 00:53:36.999
Sadas Shankar: On the left-hand side we are essentially looking through the entire computing stack.

204
00:53:37.220 --> 00:53:39.920
Sadas Shankar: If you look at it, it kind of

205
00:53:40.200 --> 00:53:47.659
Sadas Shankar: crystallizes into 3 different levels. One is more at the switching level, the transistor level.

206
00:53:47.940 --> 00:53:59.179
Sadas Shankar: which is less applicable for this project. The second one is more at the instruction level, so that essentially represents hardware architecture and circuits.

207
00:53:59.220 --> 00:54:20.319
Sadas Shankar: and the last one is the simulation level. The y-axis on the first plot on the left is energy in joules, and it is showing what I just explained to you. So the summary is that the entire computing stack can be abstracted into these three levels.

208
00:54:20.600 --> 00:54:26.230
Sadas Shankar: For this particular project levels 2 and 3 are more relevant

209
00:54:26.460 --> 00:54:36.209
Sadas Shankar: than level 1, which is closer to the transistor. Now, if you go to the right-hand figure, you can essentially

210
00:54:38.430 --> 00:55:03.339
Sadas Shankar: look at computing in terms of four different domains: hardware systems, which are mostly microelectronics; quantum-information-based; nature-inspired; and also algorithms and software. The numbers you see are the headroom for energy efficiency

211
00:55:03.800 --> 00:55:09.510
Sadas Shankar: that is possible along these four domains.

212
00:55:09.710 --> 00:55:18.110
Sadas Shankar: So, as I mentioned, there are 3 levels of computing and 4 domains in terms of computing.

213
00:55:18.150 --> 00:55:32.449
Sadas Shankar: In this particular project, we will be mainly looking at the microelectronic systems represented in red and algorithms and software. So how are we going to go from these 2 levels of computing on the left figure

214
00:55:32.450 --> 00:55:49.039
Sadas Shankar: to the right figure, in which we are going to be essentially having an energy decrease. That is what the middle figure is showing. So what we are trying to set up here is a way of looking at different algorithms: dividing the algorithms up into different components

215
00:55:49.080 --> 00:56:16.969
Sadas Shankar: and essentially estimating the energy each would use on different hardware architectures. You heard the team talk about three different architectures: ASICs close to the edge, then FPGAs for timing, and then GPUs for heavier computing. So you have three architectures. And then you listened to Dionisio talk about at least four algorithms:

216
00:56:17.080 --> 00:56:42.679
Sadas Shankar: compression, reconstruction, segmentation, and data simulation. So what we will be doing is setting up an exhaustive list of algorithm components that we are going to benchmark for energy. These are then going to be put together in the specific workflows that you heard mentioned, and we will measure the energy,
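
The partitioning step that follows from such benchmarks can be sketched as a lookup over measured per-component energies. All the numbers below are made up for illustration; `None` marks a component that cannot run on that platform:

```python
# Hypothetical energy per run, in joules, for each algorithm component
# on each of the three architectures mentioned in the talk.
ENERGY_J = {
    "compression":    {"asic": 0.001, "fpga": 0.01,  "gpu": 0.5},
    "segmentation":   {"asic": None,  "fpga": 0.05,  "gpu": 0.2},
    "reconstruction": {"asic": None,  "fpga": None,  "gpu": 1.5},
}

def partition(benchmarks):
    """Assign each component to the platform with the lowest measured
    energy, skipping platforms where the component cannot run."""
    return {
        comp: min((e, hw) for hw, e in costs.items() if e is not None)[1]
        for comp, costs in benchmarks.items()
    }

print(partition(ENERGY_J))
# {'compression': 'asic', 'segmentation': 'fpga', 'reconstruction': 'gpu'}
```

Real partitioning would also weigh latency, bandwidth, and silicon area, but this captures the energy-first selection described here.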

217
00:56:43.085 --> 00:56:43.490
Sadas Shankar: because

218
00:56:43.490 --> 00:57:06.519
Sadas Shankar: that will essentially feed back into what the right partitioning of the algorithms is onto the hardware platforms that you have: the three hardware platforms you heard, and about four algorithms. In fact, there are more than four algorithms in this project. But that partitioning will be done with energy efficiency as the key goal.

219
00:57:07.100 --> 00:57:10.059
S. J. Ben Yoo: So that's all I had.

220
00:57:10.060 --> 00:57:23.488
Angelo Dragone (SLAC): Thanks. Let me go very quickly through the other thrusts, since we are already late in the hour, and focus on

221
00:57:24.654 --> 00:57:44.420
Angelo Dragone (SLAC): aspects that are related. As I said before, we want to try to move away from architectures that are designed based on worst-case scenarios, and try to introduce the possibility of tuning them as conditions change.

222
00:57:44.420 --> 00:58:10.380
Angelo Dragone (SLAC): That's what Thrust 3 will study; I will show you a slide that tries to explain this a little bit better. Thrust 4, instead, is going to look at the way we communicate within the network. As I said in my very first slides, nowadays the communication is just upwards, from a sensing node to the layers above. We will optimize this by introducing communication across devices

223
00:58:12.000 --> 00:58:15.189
Angelo Dragone (SLAC): at the various layers of the detector, and we will try to leverage

224
00:58:15.270 --> 00:58:16.040
S. J. Ben Yoo: So that's.

225
00:58:16.170 --> 00:58:33.849
Angelo Dragone (SLAC): and move away from synchronous communication, so that communication can happen when it is needed

226
00:58:34.160 --> 00:58:37.759
Angelo Dragone (SLAC): and be done with a minimal power consumption.

227
00:58:37.800 --> 00:59:05.189
Angelo Dragone (SLAC): These two thrusts have not started yet, so I don't have much to show, but one thing that I like is that within this project we will try to prototype some of these solutions. This will be done in the front end, and the idea is basically to try to make the front end highly parametric,

228
00:59:05.230 --> 00:59:11.149
Angelo Dragone (SLAC): in a way that we can then model it and include it in a training algorithm.

229
00:59:11.220 --> 00:59:17.800
Angelo Dragone (SLAC): Nowadays these components are typically static, and the training happens in whatever comes after;

230
00:59:18.390 --> 00:59:43.099
Angelo Dragone (SLAC): we want to include the front end in this kind of optimization. For now, that work is ongoing in the various institutions. Some of the things that we did here at SLAC are to design a chain and introduce parametric elements in the front-end amplifiers, in the ADC, and also in the filter,

231
00:59:43.100 --> 00:59:51.449
Angelo Dragone (SLAC): which we're now trying to move to digital. And these are some simulation results. We can expand on these aspects, maybe

232
00:59:51.450 --> 00:59:53.169
Angelo Dragone (SLAC): in future meetings.

233
00:59:53.755 --> 01:00:08.329
Angelo Dragone (SLAC): The last thrust is the one more related to materials and devices. As I said, we want to make sure that within this project we also consider elements of sensing, as the

234
01:00:09.467 --> 01:00:26.529
Angelo Dragone (SLAC): sensors, of course, will need to be applicable in extreme environments, and also to detect

235
01:00:29.236 --> 01:00:39.019
Angelo Dragone (SLAC): And so there's a study.

236
01:00:39.990 --> 01:00:46.969
Angelo Dragone (SLAC): sensors. The work on this has just started, and it has two

237
01:00:49.540 --> 01:01:09.649
Angelo Dragone (SLAC): kinds of sensors for extreme environments. And of course, for the applications that we have considered, in particular high energy and faster response. Then we will study them in operational conditions.

238
01:01:10.270 --> 01:01:28.880
Angelo Dragone (SLAC): And of course the architecture of the sensor itself will be co-designed together. Initial steps have been taken towards this characterization, and some examples of the things that

239
01:01:29.400 --> 01:01:30.010
S. J. Ben Yoo: Okay.

240
01:01:30.416 --> 01:01:38.540
Angelo Dragone (SLAC): Sorry. That's what they started discussing in their first meeting, in the first quarter. But of course this is what

241
01:01:40.016 --> 01:01:41.493
S. J. Ben Yoo: Maximum.

242
01:01:42.970 --> 01:02:03.350
Angelo Dragone (SLAC): This is the timeline for the project; we just wanted to show it to you. We are basically here at this point: we only just started, and the bulk of the work will happen next fiscal year, and actually the next two fiscal years. Then, in the last part of the project, we will characterize the prototypes.

243
01:02:03.700 --> 01:02:10.429
Angelo Dragone (SLAC): And I'll close here, just showing the list of all the participants in these projects. To my knowledge, others

244
01:02:11.560 --> 01:02:12.280
S. J. Ben Yoo: Hold on!

245
01:02:14.200 --> 01:02:27.610
Angelo Dragone (SLAC): are engaging with us and have offered collaboration, even if they were not in the original proposal. And yeah, we are very close to the end of this meeting, so if there are some quick questions we're happy to answer.

246
01:02:46.680 --> 01:03:00.840
jsnelso: Angelo, this is Jeff. Just one quick comment on your last thrust and GaN: there may be a lot of radiation-effects data here at Sandia, not necessarily from our project.

247
01:03:01.120 --> 01:03:07.509
jsnelso: But there's lots of work going on there that might be of use to you guys that I could connect you with.

248
01:03:07.510 --> 01:03:16.999
Angelo Dragone (SLAC): Yeah, I'm sure. That could be useful in the future, to

249
01:03:17.210 --> 01:03:22.880
Angelo Dragone (SLAC): explore the possibility to collaborate, and we could leverage your facilities as well.

250
01:03:23.190 --> 01:03:23.920
jsnelso: Sure.

251
01:03:32.820 --> 01:03:34.560
Angelo Dragone (SLAC): Anyone have a question or comment?

252
01:03:36.060 --> 01:03:36.980
S. J. Ben Yoo: Come on!

