WEBVTT

1
00:00:03.440 --> 00:00:07.593
Valerie Taylor: Okay, I think I'm in the right one. I'm the only one here so far.

2
00:00:09.650 --> 00:00:11.330
Valerie Taylor: Let me just see.

3
00:00:15.060 --> 00:00:16.650
Valerie Taylor: Yeah, this is the one.

4
00:00:17.000 --> 00:00:19.799
Valerie Taylor: But I let's see if others join.

5
00:00:23.610 --> 00:00:25.500
Valerie Taylor: Yes, there we go.

6
00:00:28.100 --> 00:00:29.160
Valerie Taylor: Hello.

7
00:00:34.780 --> 00:00:35.710
Christian Mailhiot (Sandia): Hello!

8
00:00:36.140 --> 00:00:37.090
Valerie Taylor: Hi.

9
00:01:02.850 --> 00:01:07.510
Valerie Taylor: okay, sweet is when I reported.

10
00:01:17.570 --> 00:01:18.780
Paul McIntyre: Hey! Valerie?

11
00:01:19.660 --> 00:01:21.480
Valerie Taylor: Hi, Paul, how are you?

12
00:01:21.480 --> 00:01:23.379
Paul McIntyre: Doing? Well, thanks. How about yourself?

13
00:01:23.380 --> 00:01:24.690
Valerie Taylor: I'm doing well.

14
00:01:24.920 --> 00:01:25.400
Paul McIntyre: Good.

15
00:01:29.880 --> 00:01:30.600
Valerie Taylor: S.

16
00:01:34.200 --> 00:01:35.300
Valerie Taylor: Hi, Mark.

17
00:01:38.230 --> 00:01:39.350
Mark Hersam: Hey, Valerie, how you doing?

18
00:01:39.740 --> 00:01:44.629
Valerie Taylor: Good. So Salman and I are at the TPC meeting in San Jose.

19
00:01:45.120 --> 00:01:45.800
Mark Hersam: Gotcha.

20
00:01:46.510 --> 00:01:47.120
Valerie Taylor: Yes.

21
00:01:52.160 --> 00:01:53.840
Valerie Taylor: I'm gonna take off my name.

22
00:01:54.170 --> 00:01:54.850
Valerie Taylor: Oops.

23
00:01:57.120 --> 00:01:57.970
Valerie Taylor: Great.

24
00:02:00.780 --> 00:02:06.279
Valerie Taylor: But yeah, just put everybody's. I think you're right. I did this.

25
00:02:07.900 --> 00:02:14.605
Valerie Taylor: So I in my slide. I didn't see what you were referring to. Oh, see right here.

26
00:02:19.620 --> 00:02:20.810
Valerie Taylor: and then see?

27
00:02:21.280 --> 00:02:24.390
Valerie Taylor: Oh, man, okay, thank you.

28
00:02:24.500 --> 00:02:26.459
Valerie Taylor: And then I took out.

29
00:02:26.630 --> 00:02:28.300
Valerie Taylor: You have one here, too. So

30
00:02:28.510 --> 00:02:30.700
Valerie Taylor: and then I have this with Andrew.

31
00:02:41.140 --> 00:02:42.240
Valerie Taylor: Okay.

32
00:03:01.860 --> 00:03:02.550
Valerie Taylor: oh.

33
00:04:11.090 --> 00:04:12.879
Valerie Taylor: let me make it

34
00:04:57.200 --> 00:04:59.039
Valerie Taylor: all right. Let me know.

35
00:05:00.660 --> 00:05:02.429
Valerie Taylor: Let's see, Andrew.

36
00:05:23.540 --> 00:05:28.600
Paul McIntyre: Well, we're 5 minutes in, Valerie, I think. Should we get started?

37
00:05:28.750 --> 00:05:31.980
Valerie Taylor: We can. We can start whenever you're ready.

38
00:05:31.980 --> 00:05:33.399
Paul McIntyre: Great. Thank you.

39
00:05:34.570 --> 00:05:41.040
Valerie Taylor: Okay, so I'll go ahead and bring up my slides. And so

40
00:05:42.550 --> 00:05:46.070
Valerie Taylor: let's see, are you able to see my slides and let me just

41
00:05:47.140 --> 00:05:49.320
Valerie Taylor: bring it up in presentation mode.

42
00:05:51.050 --> 00:05:52.120
Paul McIntyre: It looks good.

43
00:05:52.120 --> 00:06:10.909
Valerie Taylor: Okay, so that's good. And we have the Bia leadership here as well, so we'll do a team presentation. So thank you. So we're happy to present to Meerkat, in terms of the center, about Bia. Let me just

44
00:06:11.240 --> 00:06:34.850
Valerie Taylor: okay. And that's looking at a co-design methodology to transform materials and computer architecture research for energy efficiency. So Bia is not an acronym for the project, but just the name of the project, and that's because Bia is an Olympian goddess of power and energy.

45
00:06:36.490 --> 00:06:44.938
Valerie Taylor: So, and just in terms of a little bit more about what we want to do.

46
00:06:45.580 --> 00:07:14.160
Valerie Taylor: So I'm going to just give an overview of the project. And then within Bia we have 3 thrusts: the applications thrust, the simulation and architectures thrust, and materials and devices. And so Salman is going to talk about the applications. We also have Andrew to talk about simulation and architectures, and then Mark with the materials and devices.

47
00:07:15.720 --> 00:07:36.719
Valerie Taylor: So just to give an overview of the Bia team. So we have excellent researchers that really reflect the multidisciplinary aspect of the Bia project. And that's where we have researchers from Argonne, also University of Chicago, Northwestern,

48
00:07:36.720 --> 00:07:52.660
Valerie Taylor: Fermilab, as well as Rice University. So a great team that covers all the areas of materials to architectures, to also system software as well as the applications.

49
00:07:54.240 --> 00:08:19.359
Valerie Taylor: So just in terms of that overview. When we look at end-to-end co-design, we look at it in terms of having that opportunity to really frame the way we do research in devices and materials. And so if we go back to the 2018 BRN report, where we had all those different levels of the stack

50
00:08:19.360 --> 00:08:27.729
Valerie Taylor: going from. And that's the design stack going from materials and chemistry all the way up to applications.

51
00:08:27.870 --> 00:08:52.790
Valerie Taylor: It's recognized that it's often the case that we are aware of the relationships between adjacent layers of the stack. But when it comes to non-adjacent levels of the stack, that's where there are complications in terms of really understanding direct relationships between non-adjacent

52
00:08:52.790 --> 00:09:16.830
Valerie Taylor: levels of the design stack. And by that, meaning to look at the metrics and features that are important between those non-adjacent levels, because oftentimes, if you go from one level just to the adjacent level, some information or some aspects may be lost. But you want to have that direct relationship.

53
00:09:17.060 --> 00:09:29.979
Valerie Taylor: also understanding the technologies involved and how they're being used by, again, that non-adjacent level, and then representing those relationships

54
00:09:30.637 --> 00:09:40.039
Valerie Taylor: as cost functions. And so those are some of the challenges that we're looking at with respect to the Bia project.

55
00:09:41.440 --> 00:09:48.310
Valerie Taylor: And let me ask Paul if you can let me know if we have any questions, because I can't see the questions.

56
00:09:48.310 --> 00:09:50.359
Paul McIntyre: Will do no questions yet.

57
00:09:50.360 --> 00:10:14.099
Valerie Taylor: Okay, thank you. And so what we want to do in terms of understanding those relationships with non-adjacent layers is that we hypothesized that if we look at the co-design for 2 vertical design problems, we can use that to

58
00:10:14.180 --> 00:10:36.139
Valerie Taylor: identify and develop a co-design methodology that can be tailored to other vertical design problems. And so here, that's where we're looking at our approach of producing innovative devices, architectures, and algorithms that are really tailored to DOE science.

59
00:10:37.460 --> 00:10:45.340
Valerie Taylor: so just to talk a little bit more about the design problems, the 2 vertical design problems.

60
00:10:45.630 --> 00:10:49.640
Valerie Taylor: They're both focused in the area of detectors.

61
00:10:50.010 --> 00:11:14.129
Valerie Taylor: And so the reason for focusing on detectors is that it's recognized that within the next 2 decades there'll be a 3-orders-of-magnitude increase in the data rates, and that's in the data that's being produced by the detectors. And so we look at 2 particular design problems.

62
00:11:14.240 --> 00:11:37.990
Valerie Taylor: And one is related to data reduction where you're identifying the important features with that data reduction and looking at the transport of that reduced data. And that's for both high energy physics and X-ray science type detectors. So 2 different types of detectors

63
00:11:38.120 --> 00:12:07.119
Valerie Taylor: and then related to X-ray science, we're looking at the ability to have some steering, and that's where we can do some analysis close to the detectors, so we can have the feedback. And with those 2 design problems, we're looking at that in terms of from materials all the way up to applications for our co-design approach.

64
00:12:09.520 --> 00:12:33.569
Valerie Taylor: So when we look at that, just to go a little bit more into detail with the co-design methodology, we recognize that it's a continuous process. So that while we apply the co-design methodology to those 2 different problems, it's important that we continuously look at

65
00:12:33.570 --> 00:12:41.950
Valerie Taylor: what we're doing and how we're translating features from one level to another level that's not adjacent.

66
00:12:41.950 --> 00:13:01.259
Valerie Taylor: And so we're extracting those features that are important across the stack and developing those cost models. And having those cost models allows us to explore trade-offs, and it may be

67
00:13:01.270 --> 00:13:26.030
Valerie Taylor: trade-offs between performance and power, also size and accuracy. And when we look at performance, that's where we're looking at different metrics. Those metrics can be, for example, data rate; it can be latency; it can be time. So different aspects that we're considering. It could also be the use of memory.

68
00:13:26.770 --> 00:13:35.169
Valerie Taylor: So with our different resources. So that's going into a little more details about what we mean with the co-design.

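NOTE
The trade-off exploration Valerie describes can be sketched as a scalar cost function over cross-stack metrics. This is an illustrative sketch only; the weights, metric names, and design points below are hypothetical, not from the Bia project.
```python
# Hypothetical sketch: score candidate designs with a weighted cost over
# cross-stack metrics (lower is better); the weights encode the trade-off.
def cost(design, w_latency=1.0, w_power=1.0, w_error=1.0):
    return (w_latency * design["latency_us"]
            + w_power * design["power_w"]
            + w_error * design["error_pct"])
designs = [
    {"name": "cmos_edge", "latency_us": 5.0, "power_w": 2.0, "error_pct": 1.0},
    {"name": "neuromorphic_edge", "latency_us": 8.0, "power_w": 0.3, "error_pct": 2.0},
]
best = min(designs, key=cost)                                   # balanced weights
low_power = min(designs, key=lambda d: cost(d, w_power=10.0))   # power-dominated
```
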
69
00:13:36.450 --> 00:14:00.170
Valerie Taylor: So when we look further at the objectives, that's where, first, we have 2 objectives: one in terms of looking at those 2 vertical design problems, where we're making advances at all the levels, in terms of materials, devices, computational architectures,

70
00:14:00.260 --> 00:14:15.599
Valerie Taylor: exploring, you know, different types of architectures as well as the software, to map with the applications, to enable new applications and also radically improved energy efficiency.

71
00:14:15.910 --> 00:14:30.649
Valerie Taylor: Then, in addition to solving those 2 design problems, that's where we're also looking at that co-design methodology of translation between non-adjacent levels of the design stack.

72
00:14:32.830 --> 00:14:45.149
Valerie Taylor: So just to go a little bit more into detail about each area. So, for example, with the applications area, we're looking at high energy physics and X-ray science.

73
00:14:45.150 --> 00:15:03.060
Valerie Taylor: Where, when we look at the increasing data rates, what does that mean in terms of the advances needed in materials? What does that mean in terms of advances needed in the communication capabilities, the computing system, the algorithms.

74
00:15:03.060 --> 00:15:16.459
Valerie Taylor: And you know, there's a need to also do at-sensor data reduction, and doing that sensor-to-feature extraction and some analysis, so that we can go back and do the steering.

75
00:15:16.600 --> 00:15:36.719
Valerie Taylor: And also, when we look at that aspect of the applications, which we'll talk about in a moment: what does that mean at different levels? And as we have different architectures, what does that mean in terms of the applications and what they could do differently that they're not doing now?

76
00:15:37.950 --> 00:15:50.239
Valerie Taylor: when we look at simulations and architectures, that's where here we're leveraging new materials and devices where we're looking at many different levels, the circuits.

77
00:15:50.640 --> 00:16:17.769
Valerie Taylor: the interconnects, the computing components. We're also looking at, in addition to the computing components, the software models. And so here, we're looking at those aspects to define new hardware and software computational architectures. And then we're looking at optimizing across those components, focused on energy efficiency.

78
00:16:19.300 --> 00:16:44.020
Valerie Taylor: When we go to materials and devices. That's where there's work that's currently focused on neuromorphic materials and devices. And that's meeting the needs having to do with low power data reduction. But we're also exploring materials and also photonic design related to optical interconnects.

79
00:16:46.830 --> 00:17:11.340
Valerie Taylor: Then, if we start to look at now, some of those relationships. That's where we're translating, for example, what's needed with the application requirements into what does it need in terms of the different computational components. What does that mean in terms of also how algorithms are mapped to those components. So the system software.

80
00:17:11.510 --> 00:17:39.640
Valerie Taylor: then there's also the feedback about algorithms that are used with respect to detectors, it may be where, depending upon the components we have, we should utilize different type of algorithms to better utilize those components. And again, now that we're looking at those relationships, always looking at the cost functions to explore the trade-offs.

81
00:17:40.780 --> 00:18:06.289
Valerie Taylor: Then let's just look at, for example, the co-design approach between the simulation and architectures level and materials and devices. And that's where we're looking at, you know, what does it mean in terms of the characterization of the materials that's used for the devices? How does that translate into the computational components?

82
00:18:06.550 --> 00:18:16.870
Valerie Taylor: Then, with respect to the system software requirements, what does that mean in terms of the computational components that are being used.

83
00:18:17.200 --> 00:18:23.930
Valerie Taylor: And then there's again those cost functions coming up as we express the relationships.

84
00:18:25.110 --> 00:18:47.629
Valerie Taylor: Then, lastly, just want to mention between application and material and devices. And this, again, the question that comes up is, as we look at the future needs, especially in terms of the detectors. What does that mean in terms of materials and devices? Are there some unique aspects and pain points

85
00:18:47.630 --> 00:18:57.579
Valerie Taylor: with respect to those detectors that's looking in terms of the future needs that can actually be aided by

86
00:18:57.680 --> 00:19:09.540
Valerie Taylor: research in the material space. Then we also have again those cost models to represent those relationships that can be used to explore the trade-offs.

87
00:19:10.820 --> 00:19:35.720
Valerie Taylor: So now, just to use one example, and let's say, with the X-ray science steering, so as we have to do the steering. The question that comes up is, what's the computational load in terms of the feature extraction that's important to actually look at the steering. What does that mean in terms of latency?

88
00:19:35.720 --> 00:19:43.609
Valerie Taylor: What does that mean in terms of compute if we're doing it close to the sensors? What does that mean in terms of power requirements.

89
00:19:43.610 --> 00:20:01.310
Valerie Taylor: How does that translate? Then, in addition to the compute, we're also having to look at, how does that translate in terms of the transfer of data between the different components. So that relates to our interconnect technology.

90
00:20:01.420 --> 00:20:24.939
Valerie Taylor: And so what does that mean? Also, what are the different algorithms that we can explore to help us with that feature extraction that's used to help us with the steering? So we recognize that this process of co-design, of going between these different levels, is iterative. And it's also a continuous process.

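NOTE
The steering questions above (compute load, latency, and power near the sensor) amount to a budget check. A minimal sketch follows; the function name and all numbers are invented for illustration, not project requirements.
```python
# Hypothetical sketch: does near-sensor feature extraction fit the
# steering loop's latency and power budgets, and keep up with frames?
def fits_budget(ops_per_frame, frame_rate_hz, throughput_ops_s,
                power_w, latency_budget_s, power_budget_w):
    compute_time = ops_per_frame / throughput_ops_s   # seconds per frame
    keeps_up = compute_time * frame_rate_hz <= 1.0    # sustains the frame rate
    return (keeps_up and compute_time <= latency_budget_s
            and power_w <= power_budget_w)
# e.g. 1e6 ops/frame at 1 kHz on a 1e10 ops/s, 1 W edge device
ok = fits_budget(1e6, 1e3, 1e10, 1.0, latency_budget_s=1e-3, power_budget_w=2.0)
```
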
91
00:20:26.030 --> 00:20:44.800
Valerie Taylor: And so just to note, we are leveraging our work that was done with Threadwork, which was a previous co-design project. It focused on high energy physics detectors, but it was with simple geometries,

92
00:20:45.160 --> 00:21:02.760
Valerie Taylor: and the analysis focused on 1D data. And so, some of the outcomes: on the materials side, there was research that resulted in dual-gated mixed-kernel heterojunction transistors that were used with HEP detectors.

93
00:21:02.770 --> 00:21:15.050
Valerie Taylor: We also had some initial development of system flow, and we'll hear about that in a moment, to do more analysis that connects materials and HEP detectors.

94
00:21:15.050 --> 00:21:39.350
Valerie Taylor: Then we also developed some tools like SpikeLearn and memristor SpikeLearn: SpikeLearn to really explore spiking neural networks, and then, with memristor SpikeLearn, going down to incorporate different materials that are used with neuromorphic devices. And so that helped us to be ready for a more

95
00:21:39.350 --> 00:21:45.819
Valerie Taylor: complex vertical design and incorporating the goal of documenting that process

96
00:21:46.940 --> 00:22:08.690
Valerie Taylor: so just to note the impact: looking at different materials and new devices, also demonstrating the use of those devices in addressing the HEP and X-ray science detectors, and looking at developing that end-to-end co-design methodology,

97
00:22:08.910 --> 00:22:12.740
Valerie Taylor: and also working within the Meerkat center.

98
00:22:13.500 --> 00:22:24.339
Valerie Taylor: So with that, I'm going to turn it over to Salman to talk about the applications thrust, and just want to quickly see if there were any initial questions.

99
00:22:29.410 --> 00:22:37.770
Valerie Taylor: Okay, then we'll continue with Salman. All right, thanks, Valerie. So I'll just sort of give a more

100
00:22:37.930 --> 00:23:01.059
Valerie Taylor: the perspective from the application side, because a lot of the flow-down requirements come from the application. And then, of course, there's a loop, so the applications can also change depending on what comes back from below. So I think it's important. So we have a team here: Nhan Tran from Fermilab is the high energy physics lead, and Tejas Guruswamy from APS is the XRS lead on this. So can you move on to the next slide.

101
00:23:01.810 --> 00:23:22.039
Valerie Taylor: So the applications for detectors are very diverse, as you can imagine. So that's because the detectors are extremely diverse. So if you go to the Large Hadron Collider, or generally hadron colliders in principle, what's happening is you are basically colliding 2 beam bunches in a high energy physics beam, and.

102
00:23:27.270 --> 00:23:30.569
Paul McIntyre: Could you please mute your yourself if you're not presenting.

103
00:23:32.700 --> 00:23:38.429
Valerie Taylor: So in the case of a high energy physics collider, basically, the detectors are

104
00:23:41.870 --> 00:23:42.320
Valerie Taylor: pages.

105
00:23:44.350 --> 00:23:47.019
Valerie Taylor: Oh, your microphone is on. Okay.

106
00:23:47.610 --> 00:23:48.200
Paul McIntyre: Muted.

107
00:23:48.200 --> 00:23:49.750
Valerie Taylor: Okay, thank you.

108
00:23:49.880 --> 00:24:11.399
Valerie Taylor: So in the case of a high energy physics detector, when you're working with colliders, you basically have a detector that covers the interaction region. And what happens is that in a hadron collider it's a very, very messy interaction, and every bunch crossing you can have something like 50 to a few 100 separate proton-proton collisions. So that gives you,

109
00:24:12.690 --> 00:24:34.140
Valerie Taylor: you know, extreme particle densities, radiation doses, and large raw data rates. So you basically cannot actually look at the entire data set; you have to do triggering and so forth. The other kinds of colliders that we have in high energy physics are, of course, you know, electron colliders, where you now have point particles in the collision. And so that's a lot cleaner, and the detector design process is completely different.

110
00:24:34.210 --> 00:24:46.700
Valerie Taylor: Another thing that's of interest in the future is muon colliders. So muon colliders are also lepton colliders, so they have, in principle, the same clean interaction as what happens with electrons. However, muons are

111
00:24:46.900 --> 00:24:59.739
Valerie Taylor: not stable particles; in the muon ring, they decay. So what happens is you also have what's called a beam-induced background, or BIB, which can be extremely problematic in the detector. So you have to take care of that. So

112
00:25:00.216 --> 00:25:13.010
Valerie Taylor: And the XRS, the X-ray detectors, are obviously X-ray detectors, so they don't necessarily cover the interaction region, obviously, but they are more like cameras. But the next-generation detectors

113
00:25:13.010 --> 00:25:31.259
Valerie Taylor: have extremely high throughputs, and the throughput rates are looking very similar to current-generation high energy physics detectors. So all of these detectors have these very strict requirements, and I don't have time to go into detail. The other thing that's mentioned on this slide is simply how much power it takes to run these facilities, which,

114
00:25:31.260 --> 00:25:41.269
Valerie Taylor: obviously most of the power goes into the accelerator. But nonetheless it gives you an idea of how large things are on the high energy physics side, and also from the light sources.

115
00:25:41.960 --> 00:25:56.120
Valerie Taylor: So when you are designing something for these detectors, as I mentioned, they have commonalities, but they also have quite a bit of differences. So there's a very diverse sort of design parameter range that you have to consider. Next slide.

116
00:25:56.790 --> 00:25:57.840
Valerie Taylor: So

117
00:25:58.170 --> 00:26:13.590
Valerie Taylor: so, for example. Now, if you look at the current power requirements or Hep directory electronics, there's about 2 megawatts for Atlas and Cms at Lhc. And if you look at this figure on the right. That sort of gives you an idea of where things are

118
00:26:13.810 --> 00:26:42.769
Valerie Taylor: for the beer applications at the level of the sensors. What the data rates are is on the Y-axis. What the computation time requirements are is on the X-axis, and we're headed into the left future corner, and which are like really, really extreme data rates, as you can imagine. So this means that you can't have obviously a simple approach for handling this. It has to be hierarchical in the sense that you have very close decision making at the edge and then followed by multiple layers of data, analysis and rejection.

119
00:26:42.800 --> 00:26:56.910
Valerie Taylor: So the other important point is that if you have all of this data motion and all of this compute, and something like a 3-orders-of-magnitude improvement in the throughput, that is almost linearly adding to the power cost, which,

120
00:26:57.080 --> 00:27:07.049
Valerie Taylor: obviously, is unrealistic. You cannot have that just for the detectors. But it does mean that you need a new co-design approach to handle some of these things. Now, for

121
00:27:07.360 --> 00:27:10.360
Valerie Taylor: the other important point is that for Xrs

122
00:27:10.520 --> 00:27:31.359
Valerie Taylor: as distinct from Hep Hep, the directors are essentially static, meaning that you have lots of collisions. But you're not changing the detector on the fly. You're not actually changing a sample positioning or anything like that which you might do with a light source. So there's that other difference there that the closed cycle time scale for interaction with an Xrs detector are much shorter.

123
00:27:31.510 --> 00:27:34.570
Valerie Taylor: So even though the individual

124
00:27:35.270 --> 00:27:41.399
Valerie Taylor: time analyses are maybe easier. The fact that you have to react in real time makes it harder.

125
00:27:41.650 --> 00:27:51.139
Valerie Taylor: So so we have to take into account all of these numerical issues and technical issues in in designing the co-design approach. So next slide.

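NOTE
Salman's point that a 3-orders-of-magnitude throughput increase would add almost linearly to power cost can be put as back-of-the-envelope arithmetic: power scales with throughput times energy per operation, so holding power flat requires a matching energy-per-op improvement. The function and numbers below are illustrative assumptions, not project figures.
```python
# Hypothetical sketch: energy-per-op must improve by roughly
# (throughput gain) / (allowed power gain) to stay within a power budget.
def required_efficiency_gain(throughput_gain, allowed_power_gain=1.0):
    return throughput_gain / allowed_power_gain
gain = required_efficiency_gain(1000.0)            # hold power constant: 1000x
relaxed = required_efficiency_gain(1000.0, 10.0)   # allow 10x more power: 100x
```
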
126
00:27:53.060 --> 00:28:12.100
Valerie Taylor: So the detector structure, very, very crudely speaking, is: you have a sample in the case of an XRS detector, and in the case of a collider it's actually a collision, right, the beam collision. And then that is read off by a detector. And then there's often computing extremely close to the detector, which is what we're calling edge computing.

127
00:28:12.210 --> 00:28:26.659
Valerie Taylor: Then that communicates to online computing, which is also very close. And then data is further reduced; maybe, you know, various kinds of choices are made. And then it goes to offline, which could be large farms, which could be,

128
00:28:26.770 --> 00:28:55.530
Valerie Taylor: you know, so online and offline computing could be some combination of Gpu and CPU, or it could be specialized architectures, asics, etc. And those things are also going to happen at the edges. So this is where this is kind of the design space that you have to kind of figure out and optimize the detector, as I mentioned, have to be composite because of the data rates being so high and they come in different levels. So level one trigger is Asics and Fpgas and I just put some power numbers here just to give people an idea.

129
00:28:55.530 --> 00:28:59.249
Valerie Taylor: The rejection factor is simply how much of the data that you are rejecting

130
00:28:59.480 --> 00:29:01.050
Valerie Taylor: at the level of the trigger.

131
00:29:01.190 --> 00:29:10.749
Valerie Taylor: Then high level triggers follow level. One triggers, the rejection factors there are even higher. And once again, these numbers are notional. Just to give you an idea of what they are.

132
00:29:10.910 --> 00:29:23.579
Valerie Taylor: There are different kinds of readout electronics again, implemented in different ways. And so there's a lot of design space here to optimize. Basically, that's the important thing to keep in mind.

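NOTE
The hierarchical trigger chain described above can be sketched as a cascade of rejection factors: each level divides the event rate by its factor. The factors below are notional, echoing the slide's own caveat, not actual ATLAS/CMS numbers.
```python
# Hypothetical sketch: event rate surviving a chain of trigger levels.
def output_rate(input_rate_hz, rejection_factors):
    rate = input_rate_hz
    for factor in rejection_factors:
        rate /= factor            # each level keeps roughly 1/factor of events
    return rate
# e.g. 40 MHz bunch crossings -> Level-1 (x400) -> high-level trigger (x100)
final_hz = output_rate(40e6, [400, 100])  # events/s sent offline
```
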
133
00:29:23.580 --> 00:29:26.509
Paul McIntyre: There's a question by Grzegorz.

134
00:29:28.870 --> 00:29:30.600
Grzegorz_Deptuch: Yes. Can you hear me?

135
00:29:30.970 --> 00:29:31.600
Paul McIntyre: Yes.

136
00:29:31.600 --> 00:29:38.189
Grzegorz_Deptuch: Okay? So I have questions looking at this slide, the most important question is

137
00:29:38.480 --> 00:29:43.730
Grzegorz_Deptuch: in high energy physics detectors, when you have a detector plane

138
00:29:44.180 --> 00:29:48.879
Grzegorz_Deptuch: somewhere embedded in the whole detector system. That is, you know, gigantic.

139
00:29:49.080 --> 00:29:57.939
Grzegorz_Deptuch: The problem of such a detector plane is that it doesn't have any information about the entire topology of events.

140
00:29:58.460 --> 00:30:08.099
Grzegorz_Deptuch: So edge computing applied at such a low level, you know, cannot be used effectively for generating high-level...

141
00:30:08.500 --> 00:30:14.120
Valerie Taylor: Well, the edge computing is not designed to do anything like that. It's essentially a rejection system that you're using very quickly.

142
00:30:14.120 --> 00:30:30.710
Grzegorz_Deptuch: Rejection system, but based on what? Because, you know, if you have just a few hits in the plane of the detector that you are showing, you know, just receiving the signals, you know, these hits are sparse points in the plane, and the only...

143
00:30:30.710 --> 00:30:45.649
Valerie Taylor: Well, this is not 2D. Sorry, this is a very confusing picture, because high energy physics detectors are not 2D, right? I mean, they're extremely complicated. So if you look at something like ATLAS, or something like CMS, they, you know, they have calorimetry, they have...

144
00:30:45.650 --> 00:30:55.240
Grzegorz_Deptuch: Yes, of course, I think I'm quite aware of that. So this edge computing sits outside of the detector block, like...

145
00:30:55.240 --> 00:30:56.770
Valerie Taylor: No, it's inside the detector.

146
00:30:57.000 --> 00:30:59.200
Grzegorz_Deptuch: Okay, so you are back to my question,

147
00:30:59.880 --> 00:31:05.249
Grzegorz_Deptuch: which is: inside a detector, you don't have context to do any high-level processing. How would you...

148
00:31:05.250 --> 00:31:20.929
chris kenney: So in a calorimeter, for instance, could you tell there's a jet from a single plane or not? Not that it's necessarily a plane, even. So for tracking, you're probably right. But for other parts of the detector system, it's not clear that you couldn't provide useful information.

149
00:31:21.260 --> 00:31:24.749
Grzegorz_Deptuch: It's still, you still need to collect data from certain

150
00:31:24.940 --> 00:31:39.359
Grzegorz_Deptuch: volume, certain area. This is what I mean. I'm asking this to educate, effectively, myself, because many of us are proposing similar kinds of things and approaches to computing.

151
00:31:39.500 --> 00:31:56.209
Grzegorz_Deptuch: And then, at least myself, I thought that in order to perform meaningful computation, you still need to have a context and information about the jet. You know, it doesn't come from one segment of the detector. It has to be...

152
00:31:56.210 --> 00:31:58.280
Valerie Taylor: No, it doesn't come from one segment. No, I.

153
00:31:58.280 --> 00:32:03.390
Grzegorz_Deptuch: How would you answer such a challenge, you know, that you have to have a context?

154
00:32:05.350 --> 00:32:17.909
Valerie Taylor: So I mean, I kind of agree with the fact that you need some sort of context. But the point also is that you want to bring computing as close as possible, because otherwise it's a matter of how much data you're going to push further down the line

155
00:32:18.100 --> 00:32:27.620
Valerie Taylor: where you are not. So that makes the detector even more complicated. So the question is: as much computing as possible, you want to put at the front end

156
00:32:28.310 --> 00:32:29.510
Valerie Taylor: of the detector.

157
00:32:29.680 --> 00:32:33.380
Valerie Taylor: Exactly how you're gonna do it is a design decision, like I said, I mean.

158
00:32:34.030 --> 00:32:34.440
Grzegorz_Deptuch: So like

159
00:32:34.440 --> 00:32:54.230
Grzegorz_Deptuch: this computing next to the detector, as much of it as possible. Because if you don't have an object that you can compute on, because you have just sparse hits, and, or maybe jet information, maybe you have tracklets, you have very limited ability of computing higher-level information.

160
00:32:54.230 --> 00:32:58.600
Valerie Taylor: So I'm not, you know, the expert on this. I guess, is Nhan here, or is he not?

161
00:32:58.900 --> 00:33:01.029
Valerie Taylor: But I'm I'm happy to take this question.

162
00:33:01.030 --> 00:33:02.520
Valerie Taylor: Nhan is not. He's not.

163
00:33:03.290 --> 00:33:03.720
Valerie Taylor: You don't.

164
00:33:04.055 --> 00:33:06.740
Grzegorz_Deptuch: Don't take this question as directed only at you.

165
00:33:06.740 --> 00:33:09.089
Valerie Taylor: No, no, I mean, it's a good question. So fundamentally.

166
00:33:09.090 --> 00:33:14.870
Grzegorz_Deptuch: It's a question that I'm asking myself, and, you know, any answer would be helpful.

167
00:33:14.870 --> 00:33:31.949
Valerie Taylor: No, it is. It is a good question. I don't disagree, because a lot of the data is going to be very sparse. And so the question fundamentally is, how much compute is actually there, you know, a context, as you say: how much information is there to compute on? So some of these things are basically just cuts that you're going to have to make on sparse data.
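
As a rough illustration of the kind of cut being discussed here, the sketch below applies a simple zero-suppression threshold to a sparse frame, which is about the simplest possible front-end data reduction. The frame size, occupancy, and threshold are invented for this sketch, not numbers from the project.

```python
import numpy as np

def zero_suppress(frame: np.ndarray, threshold: float):
    """Keep only readings above threshold, returning (indices, values).

    This is the simplest front-end cut: exploit sparsity by discarding
    below-threshold pixels before pushing data downstream.
    """
    idx = np.flatnonzero(frame > threshold)
    return idx, frame.flat[idx]

# A mostly empty "detector frame": 1% of pixels carry signal (made-up numbers).
rng = np.random.default_rng(0)
frame = np.zeros((256, 256))
hits = rng.choice(frame.size, size=frame.size // 100, replace=False)
frame.flat[hits] = rng.uniform(5.0, 50.0, size=hits.size)

idx, vals = zero_suppress(frame, threshold=1.0)
reduction = frame.nbytes / (idx.nbytes + vals.nbytes)
print(f"kept {idx.size} of {frame.size} pixels, ~{reduction:.0f}x less data")
```

The point of the sketch is only that, for sparse data, even a trivial cut at the front end shrinks the volume that has to be moved off-detector by a large factor.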

168
00:33:32.560 --> 00:33:38.649
Paul McIntyre: Perhaps we could move to Angelo. I think you may have a comment or a question to add to this.

169
00:33:39.052 --> 00:33:44.289
Angelo Dragone (SLAC): I think this is maybe a comment as well as a question.

170
00:33:45.860 --> 00:33:54.870
Angelo Dragone (SLAC): So I think I think the the way you should see, that is that when you talk about the computing element at the very sense of it, is not that it does the full computation.

171
00:33:54.870 --> 00:33:55.810
Valerie Taylor: No, no, of course no.

172
00:33:55.810 --> 00:34:07.189
Angelo Dragone (SLAC): It has to work together with the other computing elements that you have in a network. So the question is how you distribute the computing across various elements. And of course, there might have to be

173
00:34:07.190 --> 00:34:30.280
Angelo Dragone (SLAC): multiple layers within the detector, where the elements at the deeper level do the computation and communicate across the multiple elements. So it's a problem of how you distribute the computing and how you do the fast interlayer communication within the architecture. This is very similar to some aspects that we have in the Arrays proposal

174
00:34:30.280 --> 00:34:51.530
Angelo Dragone (SLAC): in particular. From what I'm seeing so far, there could be a lot of potential synergies between this and Arrays, in particular with respect to the studies of the network of the detector, the way all the various elements are interconnected, and also the transform work which you are part of, Grzegorz, which is the communication across layers.

175
00:34:51.770 --> 00:34:56.769
Valerie Taylor: Yeah, no, I think that's very important. I mean, I mentioned here about the HLT to offline storage and other things, but

176
00:34:56.920 --> 00:35:03.629
Valerie Taylor: we have a separate group of people thinking about this, the communication and networking, in the project as well.

177
00:35:03.820 --> 00:35:06.779
Paul McIntyre: There's a question or a comment by Alexander.

178
00:35:07.070 --> 00:35:09.440
Alexander Paramonov: Yeah, it's a comment to answer the question.

179
00:35:11.130 --> 00:35:16.520
Alexander Paramonov: So the purpose of this edge computing is to reduce the data volume.

180
00:35:17.063 --> 00:35:41.749
Alexander Paramonov: For example, it's critical for APS applications, since they just can't read out the sensor because of the very large data volume. It's also critical for collider physics, since the cabling is a major limitation. So if you can provide some higher-level information or reduce background noise, this can allow you to build a better detector.

181
00:35:43.450 --> 00:35:44.130
Paul McIntyre: Thank you.

182
00:35:44.130 --> 00:35:44.710
Paul McIntyre: Thank you.

183
00:35:45.100 --> 00:35:59.069
Valerie Taylor: Okay, yeah. So I think this conversation leads into the next slide, which is simply just an example of the kind of scenario with something like the current CMS upgrade. So these are just the numbers that I was talking about

184
00:35:59.400 --> 00:36:04.210
Valerie Taylor: over here. So that gives you an idea about the latencies and about yeah

185
00:36:04.510 --> 00:36:17.839
Valerie Taylor: feeds and throughputs, and what needs to be done. So the point is, of course, this is the current upgrade, so it's not like you can do too much with the upgrade that's going on. But these are the kinds of numbers that you are dealing with in this thing.

186
00:36:18.810 --> 00:36:34.189
Valerie Taylor: So you can see that the range of computing goes all the way from, like I was saying, close to the detector, all the way out to the worldwide computing grid. And so there are opportunities to optimize lots of things everywhere in this chain.

187
00:36:35.380 --> 00:36:38.565
Valerie Taylor: Okay, so

188
00:36:39.410 --> 00:36:54.219
Valerie Taylor: So basically, what we're doing right now from the application side is looking at designs for next-generation detectors for the FCC, both FCC-ee and FCC-hh (that's a far-future kind of scenario), and also for a potential muon collider.

189
00:36:54.370 --> 00:37:20.260
Valerie Taylor: And actually, this discussion we already had, about the communication channels between the detector and the processing stages, and the fact that the initial collision event sizes are expected to grow quite a bit. There's also a question of motivation for optical data links and improved data-reduction algorithms in this chain. So that's part of this thought process. The other question basically is, if you're going to put in edge processing,

190
00:37:20.400 --> 00:37:46.290
Valerie Taylor: then there is also a question of how much power you can probably devote to that, and what kind of radiation tolerance that will have. So these are the kinds of questions that will affect the materials science, for example, if you're going to put something like this there. For the X-ray detectors (I know Thejos is here), next-generation detectors target data rates of petabytes per second. So this is megapixel detectors and gigahertz readouts,

191
00:37:46.530 --> 00:38:12.720
Valerie Taylor: and the raw data rate of the current hardware needs to be improved by about 5 orders of magnitude in order to get there. The other thing that we mentioned also is this need for closed-loop control, which for some of these things, because the shots take place very rapidly, means you have to go down to sub-microsecond latency. Currently that latency is something of the order of seconds, that's my understanding, so that will have to be improved considerably.

192
00:38:13.240 --> 00:38:16.969
Valerie Taylor: so that also gives you an idea of how many

193
00:38:17.170 --> 00:38:19.480
Valerie Taylor: orders of magnitude we are dealing with here.

194
00:38:19.660 --> 00:38:21.410
Valerie Taylor: Okay, next.

195
00:38:22.640 --> 00:38:50.200
Valerie Taylor: So it's important to have a simulator to evaluate all the composable components, both the digital and the analog, for the HEP and X-ray science applications; to have a, you know, unified hardware architecture for composing these components; and to build out the underlying software frameworks to generalize beyond the current applications. Because, like in the high energy physics situation, we have a long lead time in order to think about these things. Not so for the

196
00:38:50.240 --> 00:38:52.660
Valerie Taylor: X-ray scenario.

197
00:38:52.950 --> 00:38:59.549
Valerie Taylor: There also needs to be some sort of methodology for doing system-wide optimization, not just locally optimizing things.

198
00:38:59.570 --> 00:39:28.890
Valerie Taylor: So the application-stream objectives are basically to develop a set of benchmarks and data sets on which you can try out all these optimization techniques, and where you can say: all right, these are what our requirements are, and how would different levels of computing and communication address these requirements for these 2 kinds of detectors? So those are the requirements, benchmarks, and data sets they would hand over to the simulation and architecture teams, and also, at a lower level, to the materials and devices folks,

199
00:39:28.910 --> 00:39:30.839
Valerie Taylor: so they know exactly what

200
00:39:30.880 --> 00:39:45.030
Valerie Taylor: what the situation is: what it is for radiation damage requirements, what it is that we want for latency and other things, so that these metrics can be converted into sort of requirements for each module that you're putting into the stack.

201
00:39:47.858 --> 00:40:04.080
Valerie Taylor: So this I think I'll skip through, because Andrew covered this; this is the other scenarios. So basically, at this moment the application team is developing a set of benchmarks and data sets for evaluating new technologies and providing feedback into their development cycle.

202
00:40:04.530 --> 00:40:15.110
Valerie Taylor: And here are a couple of examples about the future needs in data reduction for high energy physics: for example, the muons I mentioned already, the decay background rejection, the event triggering,

203
00:40:15.350 --> 00:40:24.519
Valerie Taylor: calorimetry and tracking. And for X-ray sources it's the control and positioning of samples, for example, and some of these samples are sensitive; you know, you can

204
00:40:25.090 --> 00:40:43.090
Valerie Taylor: destroy them. And then there's fast processing of complex data, such as tomographic images. And the other interesting question is, you know, how do you actually change the beam focus and steering, using actual models for the control systems and the accelerators?

205
00:40:43.260 --> 00:40:44.360
Valerie Taylor: So

206
00:40:45.190 --> 00:40:51.329
Valerie Taylor: And again, the time-resolved data has to be at the rep rate of the light source if you're going to do these controls.

207
00:40:51.440 --> 00:40:56.799
Valerie Taylor: So this whole real-time thing is a brand-new kind of situation for all the light sources.

208
00:40:59.230 --> 00:41:04.470
Valerie Taylor: Alright, thank you, Salman. And let's see, Andrew. Is Andrew on the call?

209
00:41:06.280 --> 00:41:07.680
Valerie Taylor: The danger, wasn't

210
00:41:11.470 --> 00:41:15.639
Valerie Taylor: I, let me see. Was Andrew able to join?

211
00:41:17.620 --> 00:41:19.599
Mark Hersam: I don't see him here, Valerie.

212
00:41:19.600 --> 00:41:28.658
Valerie Taylor: Oh, okay. Well, let me go ahead and go through the architecture, software, and simulation for the energy-efficient

213
00:41:29.580 --> 00:41:36.770
Valerie Taylor: experiments. And so here, that's where, with respect to the vision.

214
00:41:37.070 --> 00:42:05.559
Valerie Taylor: it's looking at the combination of the HEP detector, and that's the data reduction, and the X-ray science, both in terms of the data reduction but also in terms of the real-time feedback. And so that's where you're seeing, at the top here, where there could be different types of domain-specific accelerators.

215
00:42:05.790 --> 00:42:22.669
Valerie Taylor: And so depending upon what type of accelerators are available (and that depends upon the materials, but also upon the application needs), it has different implications in terms of the software infrastructure, of how it's being mapped,

216
00:42:23.174 --> 00:42:36.790
Valerie Taylor: as well as looking at the hardware infrastructure. So it's exploring different types of domain-specific accelerators and looking at the aspects in terms of the capabilities,

217
00:42:36.790 --> 00:42:55.560
Valerie Taylor: but also the customized experiments. And so here, that's where we look at the architecture and the hardware: we're looking at the speed, we're looking at what the capability or the functionality is, but we're also looking at the ability to scale.

218
00:42:55.920 --> 00:43:13.610
Valerie Taylor: In the software, we're looking at a combination, too, of the integration and how it's doing the optimization. But also, as was talked about, there's the simulation, so that you can explore different types of scenarios and what-ifs.

219
00:43:13.920 --> 00:43:29.410
Valerie Taylor: Then, going a little bit more into the optimization, that's where you're looking at the design, you're also looking at the operation time, and also the infrastructure flexibility in terms of the different components that you have.

220
00:43:30.220 --> 00:43:56.779
Valerie Taylor: So here there are 2 canonical architectures being explored. In the first one, at the top, you're looking at the data, the initial data reduction. But you can have multiple data reductions, depending upon what your target is for ruling out

221
00:43:57.110 --> 00:44:02.140
Valerie Taylor: different components or doing different rejections.

222
00:44:02.440 --> 00:44:13.379
Valerie Taylor: And that's with the HEP and the X-ray science. But then there's the part where you're in the microseconds here, looking at that steering,

223
00:44:13.740 --> 00:44:18.989
Valerie Taylor: and that's where that becomes important in terms of having that

224
00:44:19.260 --> 00:44:26.209
Valerie Taylor: feedback to do that steering, and how much analysis is needed to be done.

225
00:44:26.870 --> 00:44:41.719
Valerie Taylor: So we want to be able to look at these 2 canonical architectures so that we can explore different types of domain-specific architectures to be used with each one.

226
00:44:43.680 --> 00:45:01.830
Valerie Taylor: So here, if we go a little bit more into detail, for example: there's work that's been done at the University of Chicago that has to do with what's called the UpDown system.

227
00:45:01.950 --> 00:45:28.489
Valerie Taylor: And so the UpDown system is really tailored to doing streaming of data, where you can do the analysis with the UpDown system. And so it allows for scalable bandwidth, and also, in terms of the components, you're using state-of-the-art components in terms of the data transformation. So

228
00:45:29.270 --> 00:45:40.930
Valerie Taylor: this is an example of where we're looking at some domain-specific architectures, where UpDown has really been utilized

229
00:45:40.930 --> 00:46:02.520
Valerie Taylor: and designed for doing that streaming of data. And it can help, for example, with the streaming and also the latency that we may have, in terms of those needs to have that feedback where it can do the rapid computation. So there are some unique aspects, too, with respect to the UpDown

230
00:46:02.520 --> 00:46:05.470
Valerie Taylor: architecture and the UpDown system

231
00:46:05.470 --> 00:46:14.849
Valerie Taylor: that's being explored with respect to the Bia Project, and this is leveraging the work that's been done at the University of Chicago.

232
00:46:16.310 --> 00:46:18.040
Valerie Taylor: And so here.

233
00:46:18.040 --> 00:46:19.070
Mark Hersam: Valerie.

234
00:46:19.070 --> 00:46:19.730
Valerie Taylor: Yes.

235
00:46:19.730 --> 00:46:22.410
Mark Hersam: It looks like there's a question from Sadas Shankar.

236
00:46:22.630 --> 00:46:23.340
Valerie Taylor: Okay.

237
00:46:24.250 --> 00:46:29.859
Sadas Shankar: Yeah. Valerie, can you go to the previous slide, please?

238
00:46:30.070 --> 00:46:30.840
Valerie Taylor: Yes.

239
00:46:31.260 --> 00:46:33.000
Sadas Shankar: The one before this.

240
00:46:35.020 --> 00:46:44.350
Sadas Shankar: So what are the quantitative metrics you are planning to use

241
00:46:44.670 --> 00:46:49.060
Sadas Shankar: here? Is it just the data rate and latency?

242
00:46:49.820 --> 00:46:54.120
Sadas Shankar: Because I don't see anything about energy here. So I'm wondering.

243
00:46:55.600 --> 00:47:05.270
Sadas Shankar: do you use them as a proxy for energy, or do you have any intent to measure it systematically, or work with somebody else?

244
00:47:05.460 --> 00:47:33.710
Valerie Taylor: So that's a really good question, thank you. So we do intend to include, for example, the power requirements; there are also the size requirements. And so we're going to mention that, too, when we get to a component that's called system flow. But those are factors that we are considering with respect to the architecture.

245
00:47:33.970 --> 00:47:35.799
Sadas Shankar: Okay. Okay. Thank you.

246
00:47:35.800 --> 00:47:38.189
Valerie Taylor: Yes. So that's a good point. Thank you.

247
00:47:38.940 --> 00:47:48.199
Valerie Taylor: But on this one we mentioned the data rates and latency, because that differentiates the 2 types of vertical designs

248
00:47:48.490 --> 00:47:57.200
Valerie Taylor: with respect to the steering. But you're right. We are considering size and power and other factors as well, because you can also

249
00:47:57.350 --> 00:48:01.000
Valerie Taylor: consider the memory usage.

250
00:48:03.300 --> 00:48:04.560
Valerie Taylor: So thank you.

251
00:48:04.750 --> 00:48:06.870
Valerie Taylor: Are there other questions, Paul?

252
00:48:09.200 --> 00:48:11.150
Paul McIntyre: I don't see any others right now.

253
00:48:11.150 --> 00:48:14.230
Valerie Taylor: Okay, okay?

254
00:48:14.370 --> 00:48:15.716
Valerie Taylor: So then,

255
00:48:16.740 --> 00:48:33.330
Valerie Taylor: when we start to look at the software components. So in addition to the architecture, we're also looking at the system software. And that is how are we mapping the algorithms that we're using with respect to

256
00:48:33.460 --> 00:48:38.849
Valerie Taylor: the applications or the detectors? How is that being mapped to

257
00:48:38.900 --> 00:49:04.770
Valerie Taylor: the different components that we have, especially taking into account that we can have some domain-specific components. So we're looking at things that have to do with the memory model that's being used: how is data transferred between the components? Do we have a shared memory model? We're also looking at the execution model,

258
00:49:05.431 --> 00:49:14.259
Valerie Taylor: In terms of what functionality is there in terms of the different components? Then there's also

259
00:49:14.450 --> 00:49:29.170
Valerie Taylor: the idea about security, isolation of devices and failures that we also need to consider, and that comes up a great deal when we look at scaling up with respect to the different components.

260
00:49:31.590 --> 00:49:48.900
Valerie Taylor: So here's an example, and this is system flow. So system flow was a component that we had with Threadwork, but now we're extending it extensively with respect to the Bia Project

261
00:49:49.050 --> 00:50:13.439
Valerie Taylor: to better represent the connection between the application, the architecture, and also the materials. And so that's where you can start to explore, for example, what happens between different systems, where you may want to explore system A, where maybe you're using the FPGA and CPU,

262
00:50:13.770 --> 00:50:22.950
Valerie Taylor: and system B, again a what-if: what if you're using an accelerator and GPU? Okay? And that's where,

263
00:50:23.120 --> 00:50:37.379
Valerie Taylor: as was brought up, we are exploring power. We're exploring effectiveness in terms of those different components. And it may be size, it may be memory. It may be time.

264
00:50:37.460 --> 00:50:58.780
Valerie Taylor: So we're using our system flow to define that interface and represent those relationships between the different levels. And those levels include the application, the architecture as well as the system software, then also materials and devices.
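
A what-if comparison of the kind described here (a system A with FPGA and CPU versus a system B with an accelerator and GPU, judged against application requirements such as latency, power, and size) can be sketched as follows. Every component metric and requirement value below is an invented placeholder for illustration, not project data, and the evaluation rule (summing metrics over a serial pipeline) is a deliberate simplification.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    latency_us: float  # per-event processing latency
    power_w: float     # power draw
    area_mm2: float    # board/silicon area

# Hypothetical candidate systems with made-up per-component metrics.
systems = {
    "A (FPGA + CPU)": [Component("FPGA", 2.0, 25.0, 400.0),
                       Component("CPU", 40.0, 65.0, 150.0)],
    "B (accelerator + GPU)": [Component("accel", 0.5, 10.0, 100.0),
                              Component("GPU", 15.0, 250.0, 600.0)],
}

# Application requirements of the kind discussed: time, power, size.
requirements = {"latency_us": 50.0, "power_w": 150.0, "area_mm2": 1200.0}

def evaluate(parts):
    """Total each metric over the pipeline and check it against requirements."""
    totals = {
        "latency_us": sum(c.latency_us for c in parts),  # serial stages add up
        "power_w": sum(c.power_w for c in parts),
        "area_mm2": sum(c.area_mm2 for c in parts),
    }
    ok = all(totals[k] <= requirements[k] for k in requirements)
    return totals, ok

for name, parts in systems.items():
    totals, ok = evaluate(parts)
    print(name, totals, "meets requirements" if ok else "violates requirements")
```

With these made-up numbers, system A meets all three requirements while system B violates the power budget, which is exactly the kind of trade-off such a what-if exploration is meant to surface.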

265
00:50:58.930 --> 00:51:09.365
Valerie Taylor: And so the architecture I want to go back. It includes the different components as well as the interconnect. And so that's really important.

266
00:51:10.010 --> 00:51:36.709
Valerie Taylor: So we've been doing some work. For example, when we looked at the I-V characteristics for a MoS2 device, we looked at power consumption and accuracy, you know, in terms of high energy physics types of algorithms that were done with respect to Threadwork. So

267
00:51:36.880 --> 00:51:42.935
Valerie Taylor: just to give an example of a question that you may ask around that

268
00:51:44.040 --> 00:51:46.420
Valerie Taylor: with system flow.

269
00:51:46.910 --> 00:51:49.740
Paul McIntyre: There is a question. Sadas has another question.

270
00:51:50.000 --> 00:51:52.890
Sadas Shankar: Yeah, so what? What is effectiveness? Valerie.

271
00:51:53.070 --> 00:52:03.259
Valerie Taylor: So by effectiveness, meaning in terms of you have power. But we can have other different metrics. It may be time.

272
00:52:03.850 --> 00:52:14.899
Valerie Taylor: Were you effective in achieving the requirements for, let's say, the steering, which has to do with time? But it could be where you also have a space

273
00:52:15.760 --> 00:52:22.170
Valerie Taylor: requirement, so effectiveness in terms of meeting the application requirements.

274
00:52:22.570 --> 00:52:31.280
Sadas Shankar: Is programmability also part of effectiveness? Because you are having an FPGA-based system on the other side.

275
00:52:31.280 --> 00:52:32.220
Valerie Taylor: Right.

276
00:52:32.410 --> 00:52:35.750
Sadas Shankar: There's an effort there as well, isn't it?

277
00:52:36.090 --> 00:52:36.940
Valerie Taylor: Pardon me.

278
00:52:37.110 --> 00:52:42.990
Sadas Shankar: There should be a bigger effort on the Fpga side in terms of programmability.

279
00:52:43.460 --> 00:53:02.009
Valerie Taylor: Right. And so the FPGA was used as an example. But it is true that with FPGAs you do have to worry about programmability. FPGAs are really good at giving you rapid prototyping, but with FPGAs you also have to worry about the time. Okay.

280
00:53:02.010 --> 00:53:03.260
Sadas Shankar: Okay. Thanks.

281
00:53:03.260 --> 00:53:12.679
Valerie Taylor: But that's where the effectiveness is really about meeting the requirements of the applications.

282
00:53:16.940 --> 00:53:37.280
Valerie Taylor: And so, for example, with system flow you can also look at different types of technology descriptors. And that's where you're going down to really get into more details at the materials level.

283
00:53:37.340 --> 00:53:56.070
Valerie Taylor: And so that's where, for example, you can go down and look at communication attributes, where you're looking at the photonics. Oh, I didn't realize the time. Okay, let me go quickly, because I want to give Mark a chance. I thought I was at 3:40, and I looked up and I'm at 3:50.

284
00:53:56.500 --> 00:54:13.269
Valerie Taylor: So with system flow, that's where you can look at hardware, software, and also runtime types of optimization. I will skip the detail here and just note that we can do cross-area guidance and do a lot of exploration.

285
00:54:13.340 --> 00:54:20.660
Valerie Taylor: So I'm going to quickly move to Mark. Sorry about that, Mark; I just recognized the time.

286
00:54:21.440 --> 00:54:30.810
Mark Hersam: No problem, I'll try to be quick. So my name is Mark Hersam from Northwestern University, and I'm speaking on behalf of the materials and devices thrust. Next slide, please.

287
00:54:32.060 --> 00:54:44.480
Mark Hersam: So we have 2 major objectives. The first is understanding, prototyping, and integrating nanomaterials-based neuromorphic devices for energy-efficient data-processing hardware

288
00:54:44.620 --> 00:54:47.520
Mark Hersam: for high energy physics and X-ray science detectors.

289
00:54:47.990 --> 00:54:57.790
Mark Hersam: And secondly, fundamental research aimed at energy-efficient optical interconnects, again for these 2 detector technologies. Next slide.

290
00:54:59.960 --> 00:55:11.559
Mark Hersam: The neuromorphic computing paradigm is one where we want to circumvent the famous von Neumann bottleneck: the fact that we need to move data back and forth between the memory block and the processor block.

291
00:55:12.090 --> 00:55:19.970
Mark Hersam: By having in-memory computing, you can avoid that, or at least minimize the energy consumption coming from that von Neumann bottleneck.

292
00:55:20.630 --> 00:55:28.419
Mark Hersam: And to do that, we need to realize fast, dense, analog, and, importantly, tunable non-volatile memory. Next slide.

293
00:55:31.070 --> 00:55:38.459
Mark Hersam: Towards that end, we're going to be exploring those goals using low-dimensional nanoelectronic materials. These materials possess

294
00:55:38.570 --> 00:55:47.479
Mark Hersam: high mobilities relatively defect-free interfaces and atomic scale thicknesses that ultimately, enable scaling

295
00:55:48.010 --> 00:55:59.249
Mark Hersam: additionally atomic scale thicknesses imply weak screening, and that allows a high degree of tunability of almost any electronic device with respect to a gate.

296
00:55:59.470 --> 00:56:04.910
Mark Hersam: And this could include devices like memristors that historically have not been gate-tunable.

297
00:56:05.710 --> 00:56:11.699
Mark Hersam: Additionally, we're going to be looking at optical responses to enable integration with some of the optical interconnect

298
00:56:11.850 --> 00:56:14.669
Mark Hersam: goals of this overall thrust.

299
00:56:15.380 --> 00:56:21.570
Mark Hersam: Additionally, low-dimensional materials provide mechanical flexibility and back-end-of-line integration,

300
00:56:21.780 --> 00:56:26.679
Mark Hersam: which will further facilitate edge sensing applications. Next slide.

301
00:56:29.340 --> 00:56:36.350
Mark Hersam: If we look at the materials themselves, we're going to be trying to understand fundamental relationships and ultimately mechanisms.

302
00:56:37.016 --> 00:56:41.710
Mark Hersam: Between low dimensional material properties and neuromorphic function in 3 categories.

303
00:56:41.850 --> 00:56:48.939
Mark Hersam: first, the mechanism and role of defects in a device called a memtransistor, which is a hybrid memristor-transistor;

304
00:56:49.670 --> 00:56:57.310
Mark Hersam: secondly, interfacial effects in van der Waals heterojunctions, and finally correlated electrical and thermal effects

305
00:56:57.460 --> 00:57:00.680
Mark Hersam: in printed nanomaterial films which will enable

306
00:57:00.870 --> 00:57:04.610
Mark Hersam: of biorealistic spiking responses. Next slide

307
00:57:06.770 --> 00:57:17.120
Mark Hersam: On the device side, we're looking more at applications, such as exploring the scaling limits and different gating schemes in the context of memtransistors,

308
00:57:17.290 --> 00:57:29.660
Mark Hersam: to integrate non-volatile gating and signal amplification in Van der Waals heterojunctions and to develop biorealistic spiking neurons and oscillators using printed 2D memoristive films

309
00:57:29.880 --> 00:57:30.859
Mark Hersam: next slide.

310
00:57:33.330 --> 00:57:37.030
Mark Hersam: Optical interconnects are the other major objective of this thrust,

311
00:57:37.330 --> 00:57:47.029
Mark Hersam: and the underlying motivation for this is that current materials limit the speed and efficiency of optical interconnects, particularly for electric optic modulators.

312
00:57:47.490 --> 00:57:59.669
Mark Hersam: For example, if we look at state-of-the-art silicon optical modulators, which rely upon free-carrier dispersion effects, these are limited to approximately 100 gigahertz, based upon the carrier mobility of silicon.

313
00:58:00.550 --> 00:58:13.930
Mark Hersam: In contrast, if we went to materials like graphene, with higher charge-carrier mobility, in principle they could go to faster speeds. But the problem is that monolayer graphene interacts relatively weakly with light.

314
00:58:14.360 --> 00:58:21.529
Mark Hersam: And as a result, this thrust is going to be looking at bulk semimetals that are 3 dimensional analogs to graphene.

315
00:58:21.670 --> 00:58:26.770
Mark Hersam: Bismuth is among the most promising candidates, due to its low effective mass and high mobility.

316
00:58:27.020 --> 00:58:35.809
Mark Hersam: This enables ultra-fast gate-tunable electro-absorption effects in a bulk material, which consequently also has high optical absorption.

317
00:58:36.000 --> 00:58:36.990
Mark Hersam: Next slide.

318
00:58:38.920 --> 00:58:49.140
Mark Hersam: Some early results from this thrust include the demonstration of gate tunability of semi-metallic and relatively thick bismuth films.

319
00:58:49.540 --> 00:58:56.379
Mark Hersam: This requires high capacitive coupling to the tune of one to 10 microfarads per square centimeter.

320
00:58:56.710 --> 00:59:05.160
Mark Hersam: This can be realized using a material developed within the team, namely hexagonal boron nitride ionogels, which are also compatible with

321
00:59:05.310 --> 00:59:10.209
Mark Hersam: variable temperature conditions. On these devices often be measured at low temperature.

322
00:59:10.420 --> 00:59:11.389
Mark Hersam: Next slide.

323
00:59:13.570 --> 00:59:24.990
Mark Hersam: There's also going to be effort in this thrust to look at modulators based upon the Pockels effect, i.e., electric-field-induced changes in the refractive index in materials like lithium niobate.

324
00:59:25.750 --> 00:59:31.160
Mark Hersam: particularly integrating lithium nibate with photonic crystal microcavities will reduce device footprints.

325
00:59:31.430 --> 00:59:37.209
Mark Hersam: and thereby enable high speed and lower switching energies next slide.

326
00:59:39.330 --> 00:59:48.569
Mark Hersam: And then, to conclude, I'll give you some deep-dive examples of ongoing research, beginning first with memtransistors. Next slide.

327
00:59:50.050 --> 00:59:56.300
Mark Hersam: So the Memtransistor, as Valerie alluded to before was a device we were looking at in the previous threadwork project.

328
00:59:56.670 --> 01:00:05.630
Mark Hersam: One of the limitations of that device is that there's a lack of independent control over the transistor action and the memristor action.

329
01:00:06.220 --> 01:00:14.930
Mark Hersam: In the new Bia Project, we're now looking at split-gate memtransistors, where we have a different gate under the body of the channel versus underneath the contacts.

330
01:00:15.210 --> 01:00:20.319
Mark Hersam: This allows us to independently modulate the memristor and transistor responses.

331
01:00:20.520 --> 01:00:26.690
Mark Hersam: and this is quite useful in the context of more efficient reinforced learning.

332
01:00:26.830 --> 01:00:27.780
Mark Hersam: Next slide.

333
01:00:29.420 --> 01:00:43.129
Mark Hersam: We're also looking at so-called self-aligned memtransistors. These are devices which allow very short channel lengths and the ability to have asymmetry between so-called drain-gated and source-gated modes.

334
01:00:43.390 --> 01:00:51.719
Mark Hersam: The 2 curves here show you the differences which are not only quantitative but qualitative in nature. For example, in terms of current saturation.

335
01:00:52.230 --> 01:01:04.699
Mark Hersam: this ability to switch between drain and source gated modes is particularly useful for efficient novelty detection, which is certainly of high interest in the detector context. Next slide.

336
01:01:06.680 --> 01:01:16.760
Mark Hersam: We're also looking at another family of devices, the so-called mixed-kernel heterojunction transistor, which, as Valerie mentioned, was also initially explored in the Threadwork project.

337
01:01:16.930 --> 01:01:22.180
Mark Hersam: This is a device that allows you to tune just by different applied biases

338
01:01:22.320 --> 01:01:27.750
Mark Hersam: between a sigmoidal response and a Gaussian response, or a mixture of the two,

339
01:01:27.890 --> 01:01:31.749
Mark Hersam: which is of high interest for support vector machine classification.

340
01:01:31.920 --> 01:01:32.910
Mark Hersam: Next slide.

341
01:01:34.500 --> 01:01:50.720
Mark Hersam: Indeed, we've been applying this mixed-kernel support vector machine classification to high energy physics detectors. This enables efficient data filtering, as Solomon alluded to earlier in the presentation.

342
01:01:50.940 --> 01:01:56.400
Mark Hersam: We're now extending this methodology to the X-ray science detectors as well. Next slide.

343
01:01:58.310 --> 01:02:11.259
Mark Hersam: And then, finally, because we have the ability to tune the Gaussian response in these mixed-kernel heterojunctions, if we can integrate that with non-volatile gate memory, which we're doing using materials like ferroelectrics,

344
01:02:11.560 --> 01:02:23.500
Mark Hersam: this will allow us to store that in a crossbar array, thereby allowing hardware-efficient implementation of Bayesian neural networks, which will enable probabilistic predictions.
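[Editor's note: one way to picture the payoff of a Bayesian neural network: if each stored weight is a distribution (a mean and a spread) rather than a single conductance value, a forward pass can be sampled repeatedly to yield a prediction together with an uncertainty estimate. A minimal software sketch; all numbers and names here are invented for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each crossbar cell stores a Gaussian weight: a mean and a
# standard deviation, instead of one fixed point value.
w_mean = np.array([0.5, -0.3])
w_std = np.array([0.10, 0.05])

def bayesian_forward(x, n_samples=2000):
    """Monte Carlo forward pass of a one-layer Bayesian 'network':
    sample weight vectors, propagate the input through each sample,
    and return the predictive mean and standard deviation."""
    ws = rng.normal(w_mean, w_std, size=(n_samples, w_mean.size))
    outputs = np.tanh(ws @ x)
    return outputs.mean(), outputs.std()

mu, sigma = bayesian_forward(np.array([1.0, 1.0]))
# sigma > 0: the prediction carries an explicit uncertainty estimate
```

In a crossbar implementation the sampling would happen in the analog hardware itself rather than in a software loop, which is where the efficiency argument comes from.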

345
01:02:24.360 --> 01:02:30.890
Mark Hersam: And with that I believe it's my last slide. So I apologize for being a couple minutes over, but I'll be happy to take any questions.

346
01:02:33.890 --> 01:02:36.239
Valerie Taylor: And thank you, Mark, and sorry to give

347
01:02:36.470 --> 01:02:38.920
Valerie Taylor: so little time, so thank you for that.

348
01:02:43.040 --> 01:02:47.920
Paul McIntyre: There's a question in the chat from Maya.

349
01:02:50.320 --> 01:02:52.730
Paul McIntyre: Let's see.

350
01:02:54.110 --> 01:02:54.980
Paul McIntyre: Okay.

351
01:02:54.980 --> 01:02:56.910
Mark Hersam: Yeah, so the question is, is there.

352
01:02:56.910 --> 01:02:59.179
Paul McIntyre: About system flow, yeah, for GitHub.

353
01:02:59.180 --> 01:03:00.860
Mark Hersam: For system flow? Valerie?

354
01:03:01.170 --> 01:03:10.210
Valerie Taylor: Yes. So we're working on making that available, so it should be available soon in terms of system flow.

355
01:03:11.290 --> 01:03:12.060
Valerie Taylor: Yes.

356
01:03:15.790 --> 01:03:17.830
Paul McIntyre: Well, we're a little bit over,

357
01:03:18.280 --> 01:03:42.510
Paul McIntyre: and I think I and others have to have to jump onto other calls. Thanks so much, Valerie, Mark, Solomon for sharing, and and your whole team, I'm sure, collaborated in the slide, the slide making and and preparing an excellent presentation overview. I think it was very stimulating, and

358
01:03:42.570 --> 01:04:01.649
Paul McIntyre: look forward to having opportunities to work together on aspects of of what via is doing, and also having an opportunity to see what some of the other teams are doing in future. iterations of this meeting. So thanks. Everybody

359
01:04:02.010 --> 01:04:02.870
Paul McIntyre: have a great day.

360
01:04:02.870 --> 01:04:05.780
Valerie Taylor: And thanks Mark and Solomon. Yes.

361
01:04:06.040 --> 01:04:06.690
Paul McIntyre: Cheers.

