Very beautiful research and thorough documentation.
I initially wanted to comment that this looks a lot like time-domain reflectometry on a conceptual level - but as Cindy Harnett seems to be your advisor, you probably know that already :)
I did some work on TDR analysis when investigating CATV fraud detection, but it turns out that we generally know where on the fiber plant each CPE is, and collecting the DOCSIS timing data from those devices is essentially free.
This is really neat—thanks for sharing! I love the general idea of making materials more "self-aware" or inspectable. It's very sci fi!
The research I did before my current job touched ever so slightly on this too, so even cooler to see it on the front page. What we were doing was using complex valued neural nets to learn the transmission matrix of an optical fibre. It was previously done in the optics community by propagating Maxwell's equations, but we were able to beat the state of the art by a few orders of magnitude with a very simple architecture (the actual physics just boils down to a single complex matrix multiplication!). The connection to your work here is that if the fibre is bent you have to relearn a new matrix. It could even be possible to learn some parameterized characterisation of the fibre, so you could say do some input/output measurements and use that to model a spline of the fibre. We did not get that far though!
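The single-matrix point is fun to sanity-check numerically: in the noiseless linear setting, enough random input/output probe pairs recover the transmission matrix by ordinary least squares (a toy sketch with made-up dimensions; the neural-net machinery earns its keep once measurements are noisy or phaseless):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # toy number of fibre modes

# Unknown complex "fibre": a random transmission matrix.
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Probe with random complex inputs and record the outputs.
X = rng.normal(size=(n, 3 * n)) + 1j * rng.normal(size=(n, 3 * n))
Y = T @ X

# Y = T X  =>  Y.T = X.T T.T, so least squares recovers T directly.
T_hat = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T

print(np.allclose(T_hat, T))  # True
```

Bending the fibre would change T, which is exactly why the matrix has to be relearned after each deformation.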
Interesting! I'm not sure to be honest. I imagine in practice it's hard to get a pure twist without any bending, and if the system can't detect any difference with a twist then the need to detect it is also nullified as it is effectively the same system.
This is fantastic! One can easily imagine some minor refinements that would allow this to be mass producible with very high accuracy. And the applications are abundant. I imagine you could use space filling curves to make 2D or 3D sensors that could cost-efficiently give robots a sense of touch. Wrapped around something like a flexible tube you could make it directionally sensitive for proprioception. It's easily possible that other things that affect the air gaps like say temperature differences could be detected and localized as well.
There are a lot of cool applications indeed! I was able to use it to do gait sensing for a soft robot "leg", but I have to wait for the paper to be published later this year before going into too much detail.
What an excellent write-up. Very clearly explained, and the use of gifs and visuals to illustrate concepts was spot-on. I'm only a software engineer with a limited understanding of this field, but really enjoyed reading this and learned a lot. Well done and congrats on the PhD.
Not quite the same thing, but this reminds me of DAS using fiber optic cable for various acoustic sensing tasks--basically as an alternative to geophones/hydrophones. There have been a number of papers using transoceanic fibers for various monitoring tasks.
That is also used for various industrial applications, e.g. for strain sensing by Luna Innovations. I know that Schlumberger has various patents on fiber-optic sensing relating to towed streamers (e.g. for marine seismic acquisition.) But I haven't seen it used for soft robotics before.
Yeah, initially doing the literature review was a bit daunting because of all this existing work, especially FBG-type sensors, but this idea is so fundamentally simple that it's been mostly bypassed by the smarter minds.
Your sensor data seems to have quite large "dead zones" - those should be trivially fixable by reducing the inter-sensor distance, right?
Would it be useful to sense the direction of the bend? I reckon this might be possible by dividing the tube like a Mercedes logo, and having three sets of the sensor in one outer tube.
Is there a way to sense multiple bends? With the current setup that'd result in invalid readings as you're essentially OR-ing the value. Are there any good solutions for this?
Great ideas! Even though I haven't implemented it fully, it is possible to sense multiple bends because each bend will always have the same relative attenuation (across the strands), so it would just be a matter of matching on the relative deltas from one reading to another. The catch, though, is that at some point no light will reach the end if every joint bends a lot. There are ways to mitigate that, but my comment is too long already!
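A toy sketch of that relative-delta matching, with entirely made-up per-gap attenuation signatures (dB of loss each gap inflicts on each strand when bent):

```python
import numpy as np

# Hypothetical per-gap attenuation signatures (dB of loss per strand
# when that gap is bent). Values are invented for illustration.
signatures = np.array([
    [3.0, 0.0, 3.0],   # gap 0 attenuates strands 0 and 2
    [0.0, 3.0, 3.0],   # gap 1 attenuates strands 1 and 2
    [3.0, 3.0, 0.0],   # gap 2 attenuates strands 0 and 1
])

def locate_bend(delta_db):
    """Match an observed per-strand loss delta (dB) to the closest
    gap signature by direction, ignoring overall bend strength."""
    delta = np.asarray(delta_db, dtype=float)
    # Normalize both sides so only the *relative* attenuation matters.
    d = delta / np.linalg.norm(delta)
    s = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    return int(np.argmax(s @ d))

# A mild bend at gap 1 scales its signature but keeps the ratios.
print(locate_bend([0.1, 1.4, 1.6]))  # → 1
```

Normalizing both sides means only the ratios between strands matter, so the same gap is identified regardless of how hard the bend is.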
Could you increase power output through the fibre in response to a loss of signal, or one that falls below a given threshold? Using reasonable bounds to account for errors where attenuation isn’t the cause of no signal being received.
I've been looking for a sensor that can accurately detect a golf club sweeping over the top of it (how close, how quickly, arc of swing).
The idea being to create a golf launch monitor that doesn't require hitting a ball, so you can play sim golf inside. Think playing alongside the Masters as you watch on TV in the lounge - without smashing a golf ball through your TV.
I am wondering if this could be suitable (or a number of them ganged together).
Hitting the sensor and sending it flying across the room into the TV seems too risky.
As mentioned, a high FPS camera along with the Kinect tech to extract a skeleton would work so much better. You could make that in your garage using a PlayStation Eye and existing open source tech.
Yep, the sensor would need to be buried/encased in a mat. The club needs to make contact with the mat realistically (i.e. like taking a divot on a real grass surface).
Most launch monitors today use high-speed cameras or Doppler radar. Neither of them really works without a ball. The difference between a good shot and a crap shot is a few millimetres. The technology can measure a bit of data from the club alone (club speed, angle of attack) but that's not enough to accurately extrapolate actual ball flight, since it's all about the quality of the contact of club on ball before ground.
The opportunity is for something that can accurately measure your golf swing without a ball.
Optishot is not that:
> Since no ball data is recorded, the OptiShot 2 cannot accurately read how your club impacts the ball, so if you top the ball or even miss entirely, the OptiShot 2 will show that you made solid contact.
Some kind of electromagnetic sensor does seem like a way to go. Not sure of the practicalities of magnetising (and keeping magnetised) a golf club head though.
And ideally it would work with all of your clubs or at least all your metal clubs, not just one special club.
They put light down a tube and then measured the light to trigger a key press. That's why bending your fingers/hand did anything. They revised the mechanism in the later generations.
I don't understand what happens when the sensor is bent in more than one location.
At the beginning you mention a ToF sensor, which made me think that you're looking at reflections from the bends and measuring distance to them, but this seems not to be the case. ISTM that if you bend the sensor in two places, you'll simply get the sum of the log-attenuations from both. If we assume that the "strength" of the bend continuously changes attenuation, ISTM that you need as many strands as there are gap locations to be able to disambiguate between any two sets of bends.
Am I misreading something or is this intended to operate in cases where we know only one bend is present?
In the paragraph "Visualization of the OptiGap Sensor System", it looks like the gap pattern from multiple sensors provides a unique signature that can be translated into the exact location along the length of the sensor. The mechanism for translating the waveforms to an actual location seems to be based on a Bayesian model, according to the "Realtime Machine Learning on a Microcontroller" paragraph.
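For a feel of what such a model could look like (a guess at the flavour, not the author's actual code), a Gaussian naive Bayes classifier over per-strand light readings fits comfortably on a microcontroller:

```python
import math

# Toy training data: per-strand light readings (arbitrary units)
# labelled with which gap was bent. Numbers are entirely made up.
samples = {
    "gap0": [(0.2, 0.9), (0.25, 0.85), (0.18, 0.92)],
    "gap1": [(0.9, 0.3), (0.85, 0.28), (0.88, 0.35)],
}

def fit(data):
    """Estimate per-class mean and variance for each strand."""
    params = {}
    for label, xs in data.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-6
                 for col, m in zip(zip(*xs), means)]
        params[label] = (means, vars_)
    return params

def predict(params, x):
    """Pick the class with the highest Gaussian log-likelihood."""
    def loglik(means, vars_):
        return sum(-0.5 * math.log(2 * math.pi * s)
                   - (v - m) ** 2 / (2 * s)
                   for v, m, s in zip(x, means, vars_))
    return max(params, key=lambda lbl: loglik(*params[lbl]))

model = fit(samples)
print(predict(model, (0.22, 0.88)))  # → gap0
```

The whole thing is a handful of multiply-adds per class, which is why this style of model runs fine in realtime on a small MCU.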
Have you explored fiber Bragg gratings? 10 years ago a student of mine semi-successfully explored sensing the shape of a firefighter hose using that technique. It seems to be gaining traction again lately for optical shape sensing.
I explored FBG sensors early on and they are very cool -- I was aiming for a less expensive and more robotics-oriented application. Something that can be seamlessly integrated into a design at a lower level without the complexity of FBG technology.
I think you made the right choice. You've come up with a great sensor topology. FBGs are good, but there's so much to them that is just infuriating. I ended up late in my optics work spending most of my time finding ways to avoid them.
The perpetual issue with FBGs is cost, in my experience. For quality sensors, you can use cheap fibre with expensive detectors, or cheap detectors with expensive fibre. There's been a perpetual promise of the tech getting cheaper, but never seems to really drop significantly. We always struggled to find buyers once they saw the price tag.
Reading about them, the main reason is because you have to manufacture regularly repeating slices of glass in a column over large distances whose thicknesses are usually on the order of the wavelength you're dealing with. So visible light requires some really small slices.
I could probably make fiber Bragg gratings in my ceramics studio, yet it would require buying some very specific thickness glass sheet, setting up a bunch of hole punch jigs to get 1 m worth out of 1 µm slices, figuring out some way to actually layer 1,000,000 slices correctly, and then not just melting them to slag accidentally.
The ones we purchased were made in two different ways. Production ones were made by using a diffraction pattern of UV light to modify the refractive index of the core slightly; the other was using femtosecond lasers to alter it more dramatically.
The UV light method worked fine, came out to about $600US per fibre with 32 gratings each, not exactly cheap. The measurement unit was on the order of $20k US though.
That kind of surprises me, since I'd figure that somebody would be making the equivalent of a RepRap that puts out glass filament, like this 3D glass printer [1] or the various glass resin methods ([2] and [3]), just with some kind of inline laser processing or post-deposition modification. Most of those use UV or lasers anyway. The methods mentioned don't seem like they should cost $20,000/unit unless the unit's 1 km+.
It seems to me that the refractive index plays a role: couldn't you increase resolution by replacing air by another medium every other cut?
Say air, water, air, etc.
My reasoning is that you'd increase the resolution without adding too much technical complexity.
My maths is too rusty to evaluate how it would mess with the gray code though.
I didn't include this in my article, but I did some experiments early on (for a different idea) with air bubbles in oil inside a Teflon-coated tube. That presented a lot of challenges (mainly the bubbles breaking up) which made it not ideal for something like this.
This can certainly be miniaturized with the right manufacturing techniques but I left that for the future.
As someone about to start a PhD in theoretical physics, how did you do 3 years? I've been told a doctorate is 2 years of classes followed by 3 to 5 years of research. Did you already have a master's that your doctoral college accepted to waive the class requirements?
The short answer? Weekly meetings with my advisor! Long answer: I also had 2 years of classes but I started working on my research immediately, while taking classes. By the time I finished all my classes and became a candidate I had one paper already published and another one accepted, so I was able to get a 3rd paper out and defend by the end of the 3rd year.
With all due respect, I think you're telling a half-truth here. Your LinkedIn profile shows you had two MS degrees before starting your PhD. Undoubtedly you got some of the PhD course requirements waived. That means your total time post-BS was 5 years. I did the same thing - 1.5 years (3 semesters) MS and then came back later (different school) to do a PhD and got half the course requirements waived. Finished the PhD in 4 years but total time post-BS was a more humble 5.5.
This is awesome! Presumably you can make this work with any interface that doesn't enforce the total internal reflection of a fiber optic cable, and therefore allows light to leak out. Instead of an air gap, have you tried experimenting with removing the cladding of the fiber optic cable, but keeping the core intact?
Alternatively, could you use a short segment of colored cladding that allows certain wavelengths to leak out more than others? I think that would allow you to encode each bend point as a different color -- which might require a different (more expensive) rx sensor, but could be useful for certain applications.
I did experiment with various ways of allowing light to escape but nothing came close to the properties of a total air gap. You can actually measure (relative) bend angle with it like a protractor since the attenuation is very linear!
There is already existing work that uses colored segments for something similar but those techniques are hard to do outside a well equipped lab.
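A quick sketch of the protractor idea above, with invented calibration numbers: if attenuation really is linear in bend angle, a one-time linear fit is all you need.

```python
import numpy as np

# Hypothetical calibration pairs: (attenuation in dB, bend angle in degrees).
atten = np.array([0.0, 1.1, 2.0, 3.1, 4.0])
angle = np.array([0.0, 22.0, 41.0, 62.0, 80.0])

# Fit angle = a * attenuation + b once, then reuse it as a "protractor".
a, b = np.polyfit(atten, angle, 1)

def bend_angle(db):
    return a * db + b

print(round(float(bend_angle(2.5))))  # → 50
```

Real calibration data would of course come from bending the actual sensor against a reference; these numbers are only for illustration.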
This is interesting! Although needing log2(n) fibers, with a different encoding at each junction, presents quite a manufacturing challenge. Oh well, it's not a problem at the research/POC phase!
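The log2 scaling is the redeeming part, though. For illustration (the article's actual encoding may differ), a binary-reflected Gray code covers 8 gap locations with only 3 fibers, and adjacent locations differ in a single fiber:

```python
from math import ceil, log2

def gray(i):
    # Standard binary-reflected Gray code: adjacent codes differ by one bit.
    return i ^ (i >> 1)

def codes_for(n_gaps):
    bits = max(1, ceil(log2(n_gaps)))
    return [format(gray(i), f"0{bits}b") for i in range(n_gaps)]

print(codes_for(8))
# → ['000', '001', '011', '010', '110', '111', '101', '100']
```

So doubling the number of distinguishable gap locations only costs one more fiber, which is what keeps the strand count manageable.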
I wonder if by using a large nozzle, you could print out the entire sensor by laying out lengths of TPU with flexible joints at each air gap. It would depend on how well light traveled through the printed part though.
3D printing does affect the light passing through significantly. There are a number of options for fabricating these but most of the successful ones involve cutting (can even use a laser cutter).
Paul, what an amazing project, this is what hacking is all about. Congratulations!
Definitely try to explore the commercial side of your invention.
It wouldn't hurt to talk to an IP lawyer, if you're still in Uni they usually have people there doing this and you can just go talk to them, free of charge (for you!).
I'm generally against the idea of patents, mainly because of people who have learned to game and exploit the system (patent trolls etc...). Your project is a real thing with real applications; you definitely deserve a share of whatever commercial benefit this could bring to the world :D.
My (former) school is actually already in the process of doing that! My dissertation committee thought it was novel enough that it needed some IP protection and encouraged me to pursue that.
Maybe I'm not understanding the blog post, so bear with me. Isn't what he's describing what a time-domain reflectometer does? [1] I mean, that's what's used to detect breaks or kinks in fiber optic cables. The same tech is used to detect problems in civil infrastructure with embedded fiber optics.
Oh yeah cable companies have long been able to do that - I wasn't trying to compete with or replace that technology. My work was soft robotics-focused with simplicity in mind.
I wasn't commenting on the novelty aspect, I was more wondering in which situations the author's device might be better. Also, the author noted they started with a time of flight sensor, which would have made it extremely similar to OTDR
That's super cool and I hope you don't mind a little bit of unsolicited feedback but the first question everyone's asking is "what does it do?" At present the blog post starts out with two paragraphs talking about the format of the blog post and the applications but not what the sensor actually measures.
That's basically every other HN article for me. Zero context. "Blorglorp 2024.4.99 released" "With the new version, Blorglorp finally sheds its libgnipgnop dependency and increases efficiency by 1.25%". Bam, top post for the day, lots of multi-paragraph comments, and I'll still never know what it's even for.
The context tends to come some time later, when they close up shop. "We at Derplabs are proud that we dared to make an opinionated jpeg viewer, Blorglorp, that only interpreted the four first bits of every byte and ignored the rest. Commonly referred to as 'the naughty bits' by image-viewing connoisseurs. Unfortunately the market was not ready, but we are sure our ideas will gain traction in the future. Our deepest gratitude to our customers and investors that were excited to join our journey."
You think? All the closing announcements I've seen were "we've reached the end of our incredible journey, we're proud to have served our users but your data is gone tomorrow. Good luck!".
Meanwhile, I never find out what the thing even does.
lol v3.6.7845 doesn’t even self calibrate. Most people are on v3.6.8002a except Mac users and people using a venv. Works perfectly under newer plan9 emulators. I just use Albus mode in Emacs (trunk) and avoid a lot of those problems. If you forget to use trunk the mouse is disabled for some reason…
> This means the sensor can tell you where you bent it, with a predefined (and coarse) resolution.
> OptiGap’s application is mainly within the realm of soft robotics, which typically involves compliant (or ‘squishy’) systems, where the use of traditional sensors is often not practical.
This explanation is already quite clear. If I understood correctly, by "predefined resolution" you mean that it detects which silicone sleeve was bent on a tube with a series of them, correct?
Can you provide more concrete examples for how you envision it being used? The first application that comes to mind is sensing how fingers bend in a glove controller.
The finger bending example is certainly a classic for something like this, but I think it truly shines in soft robot examples like flapping-wing robots or swimming finned robots, where it's critical for sensors to be mechanically transparent so as to not impact the usually delicate dynamics. The "soft" robotic arm in my earlier paper is another good example: https://ieeexplore.ieee.org/document/9763962
Having just read the piece for the first time after you added the "bent rope" explanation: YOU NAILED IT. I literally had the reaction of thinking, "Wow, there's a super simple explanation early on! I trust this writer much more now."
The first time I saw one of these in person I was in awe. You could take a normal looking cable (think bicycle cable sleeve) and bend it and see in real time the same shape on the display.
This question surprises me. Bend location == locate where something is bent. Then some videos of the researcher bending a tube. Is there any confusion possible?
There are a lot of similarities in the approach to the linked paper (which is a very cool concept) and I saw a lot of similar concepts in my lit review. At a high level, my sensor targets bend localization with simple fabrication techniques while the linked paper is doing more general camera-based gesture recognition. I have a more thorough comparison to existing work in my actual dissertation.
Awesome work. Would be cool to make a really long one and tape it all along the links of an industrial robot arm and see if you can train a neural net to predict the location of the end effector (which you would already know precisely from the angles of the linkages) from the readings of your sensor.
If you know how OTDR works: location is determined by high-speed modulation and high-speed sampling components, which means high costs if you want to achieve higher resolution. Usually the laser pulse will be on the level of nanoseconds.
What's being introduced in the article is using multiple fibers with coded location info, so there's no need for sophisticated OTDR-like equipment to get the location info.
Not sure if my understanding is right, but good job: from a simple idea to sophisticated design, optimization, and even commercialization. Congrats.
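To put numbers on the OTDR cost/resolution trade-off: the pulse width sets the two-way spatial resolution, distance = c·t / (2·n), with n ≈ 1.47 as an assumed group index for silica fibre.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s
N_GROUP = 1.47     # assumed group index of silica fibre

def otdr_resolution_m(pulse_width_s):
    # Factor of 2: the light travels out to the event and back.
    return C * pulse_width_s / (2 * N_GROUP)

print(f"{otdr_resolution_m(10e-9):.2f} m")  # a 10 ns pulse → about 1 m
```

So localizing gaps a few centimetres apart would need sub-nanosecond pulses and sampling, which is exactly where the cost explodes.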
This has inspired about a hundred ideas in my head. Thank you!
(They're all ways which on paper seem like I might be able to do things better, but of the sort where 95% don't pan out in practice. Everything from machine learning algorithms to identify shapes, to fancy encoding with spectra of what reflects along the path, to other ways to engineer changes in the path.)
I used it more for future-proofing in case I wanted to do sensor fusion or something like that later on -- currently it's just 1D filtering so I could have used anything. Also I'm just way more familiar with using Kalman filters so it was also a comfort thing!
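For anyone curious, the 1D case really is tiny — a minimal scalar Kalman filter sketch (the noise values q and r are tuning guesses, not from the article):

```python
class Kalman1D:
    """Minimal scalar Kalman filter for smoothing a noisy reading.
    For a static state this reduces to an adaptive running average."""
    def __init__(self, q=1e-4, r=0.04):
        self.q, self.r = q, r   # process / measurement noise (guesses)
        self.x, self.p = 0.0, 1.0

    def update(self, z):
        self.p += self.q                  # predict: uncertainty grows
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1 - k)                 # uncertainty shrinks
        return self.x

f = Kalman1D()
readings = [1.02, 0.97, 1.05, 0.99, 1.01]
est = [f.update(z) for z in readings]
print(round(est[-1], 2))
```

The sensor-fusion upgrade is then just swapping the scalars for small matrices, which is presumably the future-proofing being referred to.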
This is cool as hell! I hope your PhD continues well, and that this invention serves you in the future.
Do you put any lube on the interface between the silicone sleeve and optical cable? I imagine the bending action will cause displacement, and friction there could rub and/or cause the nominal position to shift around.
Since the sleeve is a stretchy rubber as long as the inner diameter of it is a bit smaller than the outer diameter of the fiber it holds just fine. For more dynamic applications, though, a silicone adhesive, or even super glue for more permanent strands, helps!
Fixed -- thanks! And yes exactly! I had access to basically any piece of equipment I could want (including a cleanroom that can create ICs) but then basically no one would be able to recreate what I would make.
Could you get a similar effect by cutting dimples or notches in patterns (like a helical line of small conical holes) along the whole length in a sleeve instead of using separate pieces and infer overall curve shape? Or do the segments need to be cut completely through?
The catch is any notch you make will weaken the material significantly and you'll have fatigue failures. That's the sneaky part of using a flexible sleeve, you don't introduce any undesired weaknesses.
How does it work if there is more than one bend? Is it able to localize both bends? And what about the bend directionality? Anyway nice project. I could see it being used for a mesh to give robots a tactile skin covering.
It doesn't! I heavily used TPU to drive home the point that it can work with almost any light-transmitting fiber. I used PMMA optical fiber for the more fine demos.
I would like to add more color to this. A 3-year PhD is very possible for a motivated individual at what I'll call the low end of R1 universities. That doesn't mean you cannot do good research (the OP is a counterexample), but there is a fundamental difference between the program the OP went to and top-tier universities. Think Harvard, Berkeley, Stanford, etc.
It is normally pretty easy to distinguish these programs because they focus a lot on the course requirements and that the thesis counts as a course. From the OP's institution, you can see that the course load is at least 15 courses and I would not be surprised if some students do 20 [1]. These programs are more or less an advanced undergraduate with a real independent research project that spans multiple years. Conversely, top-tier universities typically operate under the "publish 3 things and you have satisfied the thesis requirements". This cannot be explicitly written and this is normally difficult to ascertain online. For example, Harvard has similar requirements [2] but you can still find it for some departments [3]. The Catch-22 with this is that someone who can publish 3 things in 3 years can publish 6 things (or more) in 5 years which will greatly increase their academic job prospects. Thus, at top-tier universities, even the best students stay for 5 years at a minimum to start working the job market. You need to be at Dantzig's level to finish at a top-tier in 3 years [4]. To summarize, if you want to finish a PhD in 3 years, look for course-heavy programs and don't expect to get hired into academia.
Edit: I see the OP commented somewhere else that they published 3 papers. The OP is obviously a standout, but I think most people have to be realistic that very few areas of research allow for publications in your first year of a PhD. For example, if you are doing research in LLMs, you are looking at a couple of years just to be brought up to speed.
In my experience in computer engineering in the academic systems of the USA, New Zealand, and Australia, a very large proportion of students will write their first paper in their first year. It is field dependent, but when I was a postdoc at a top US R1, 100% of the students I interacted with had their first paper in their first year. These even included students working on LLMs :-)
Also, top non-US universities often graduate their engineering students within 3-4 years of commencing, with 3-4 papers being a very common international expectation as well.
If you want to finish a PhD in 3 years and are interested in academia, in addition to following the path the OP has laid out you may also look to good international universities and then get your next 3-4 papers as a postdoc.
I appreciate adding data for the crowd, but "a very large proportion of students will write their first paper in their first year" is simply not true when talking about the whole population. In my department, albeit stats/ML rather than CS, the first year is dedicated to doing courses and preparing for the qualifying exams. Some students would publish a paper. A few more might be coauthors with an upper-year student (read: very little involvement). Pretty close to half would not even have an advisor until the summer. I studied at, according to US News, a top 10 in the US for CS.
Typically, non-US universities have 3-year undergrads and 2-year masters prior to the PhD. End-to-end, you are looking at the same time. There are, of course, exceptions. The UK, I think, shaves off a year by integrating undergrad and masters.
Hiding the years of a PhD by doing an extended postdoc is barely the point of the exercise. The median time for a CS PhD in the US is 7 years [1]. Subtract 2 years for good students but add a year of postdoc, and I think you have a realistic 5-6 years from start of PhD to first academic position for the top decile.
[1] https://dl.acm.org/doi/10.1145/2047196.2047264
Here are the papers if you're interested:
CS-focussed one: https://papers.nips.cc/paper_files/paper/2018/hash/148510031...
Physics-focussed one: https://www.nature.com/articles/s41467-019-10057-8
Disclaimer: I know nothing about this field but have spent a lot of time in a dark lab surrounded by various types of optical fiber.
And there you have it! The difference between a miserable experience and a good one
[1] https://www.nobula3d.com/
[2] https://ethz.ch/en/news-and-events/eth-news/news/2019/11/gla...
[3] https://www.glassomer.com/products/glass-3d-printing.html
Very nice idea
[1]: https://en.wikipedia.org/wiki/Optical_time-domain_reflectome...
Meanwhile, I never find out what the thing even does.
> OptiGap’s application is mainly within the realm of soft robotics, which typically involves compliant (or ‘squishy’) systems, where the use of traditional sensors is often not practical.
This explanation is already quite clear. If I understood correctly, by "predefined resolution" you mean that it detects which silicone sleeve was bent on a tube with a series of them, correct?
Can you provide more concrete examples for how you envision it being used? The first application that comes to mind is sensing how fingers bend in a glove controller.
Relevant patent: https://patents.google.com/patent/US20240044638A1/
The first time I saw one of these in person I was in awe. You could take a normal looking cable (think bicycle cable sleeve) and bend it and see in real time the same shape on the display.
Well done completing the PhD!
Our lab has done a good bit of work around elastomers similar to the linked paper, such as multitouch pressure sensing (https://ieeexplore.ieee.org/abstract/document/9674750). The authors of your linked paper can actually achieve what they've done with a single light source by using one of these! The zones are key (https://www.st.com/en/imaging-and-photonics-solutions/time-o...)
https://en.wikipedia.org/wiki/Power_Glove
https://en.m.wikipedia.org/wiki/Distributed_temperature_sens...
And this, dear worker drones, is why schedule destroys quality :-)
Loving the write up. Clear and simple. Good luck with squishy robots. :-)
Since you mentioned the visualization part, let me comment on my pet peeve:
The rainbow / jet color map should not be used for anything. It distorts your data and is not accessible for people with color vision deficiencies.
If you want to know more, have a look here: https://matplotlib.org/stable/users/explain/colors/colormaps... and this talk here: https://youtu.be/xAoljeRJ3lU
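As a minimal sketch of the fix in matplotlib (viridis is one of the perceptually uniform, CVD-friendly maps and has been the default since matplotlib 2.0; the data here is just a random stand-in for sensor output):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs in a script
import matplotlib.pyplot as plt

# Toy heat-map data standing in for a real measurement
data = np.random.default_rng(0).random((16, 16))

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(8, 4))
ax_bad.imshow(data, cmap="jet")        # distorts perceived magnitudes
ax_bad.set_title("jet (avoid)")
ax_good.imshow(data, cmap="viridis")   # perceptually uniform alternative
ax_good.set_title("viridis")
fig.savefig("colormaps.png")
```

Swapping the `cmap` string is usually the entire change needed.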
If you know how OTDR works, the location is determined by high-speed modulation and high-speed sampling components, which means high costs if you want to achieve higher resolution. The laser pulse is usually on the order of nanoseconds.
What's being introduced in the article is using multiple fibers with encoded location info, so there's no need for sophisticated OTDR-like equipment to get the location info.
Not sure if my understanding is right, but good job: from a simple idea to sophisticated design, optimization, and even commercialization. Congrats.
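To put rough numbers on the pulse-width point: an OTDR's two-way spatial resolution is about c·τ/(2n), so a nanosecond-class pulse already limits you to roughly 10 cm of fiber. A back-of-the-envelope sketch (the group index 1.468 is an assumed typical value for silica fiber, not from the article):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def otdr_resolution_m(pulse_width_s: float, group_index: float = 1.468) -> float:
    """Two-way spatial resolution of an OTDR pulse in fiber: c * tau / (2 * n)."""
    return C * pulse_width_s / (2.0 * group_index)

print(otdr_resolution_m(1e-9))   # ~0.10 m for a 1 ns pulse
print(otdr_resolution_m(10e-9))  # ~1.0 m for a 10 ns pulse
```

Sub-centimeter localization by this route means sub-100 ps pulses and matching sampling hardware, which is exactly the cost the encoded-fiber approach sidesteps.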
(They're all ways which on paper seem like I might be able to do things better, but of the sort where 95% don't pan out in practice. Everything from machine-learning algorithms to identify shapes, to fancy encoding with spectra of what reflects along the path, to other ways to engineer changes in the path.)
In the realm of rescuing or assisting people unreachable otherwise, could this bend and flex as much as those highly compliant robots?
Do you put any lube on the interface between the silicone sleeve and optical cable? I imagine the bending action will cause displacement, and friction there could rub and/or cause the nominal position to shift around.
00702, 4th photo caption appears to be missing the word gap. :)
The catch is any notch you make will weaken the material significantly and you'll have fatigue failures. That's the sneaky part of using a flexible sleeve, you don't introduce any undesired weaknesses.
I would like to add more color to this. A 3-year PhD is very possible for a motivated individual at what I'll call the low end of R1 universities. That doesn't mean you cannot do good research (the OP is a counterexample), but there is a fundamental difference between the program the OP went to and top-tier universities. Think Harvard, Berkeley, Stanford, etc.
It is normally pretty easy to distinguish these programs because they focus a lot on course requirements and count the thesis as a course. From the OP's institution, you can see that the course load is at least 15 courses, and I would not be surprised if some students do 20 [1]. These programs are more or less an advanced undergraduate degree with a real independent research project that spans multiple years.

Conversely, top-tier universities typically operate under "publish 3 things and you have satisfied the thesis requirements". This cannot be written explicitly and is normally difficult to ascertain online. For example, Harvard has similar course requirements on paper [2], but you can still find the publication expectation for some departments [3].

The Catch-22 is that someone who can publish 3 things in 3 years can publish 6 things (or more) in 5 years, which will greatly increase their academic job prospects. Thus, at top-tier universities, even the best students stay for 5 years at a minimum before working the job market. You need to be at Dantzig's level to finish at a top-tier school in 3 years [4]. To summarize: if you want to finish a PhD in 3 years, look for course-heavy programs and don't expect to get hired into academia.
Edit: I see the OP commented somewhere else that they published 3 papers. The OP is obviously a standout, but I think most people have to be realistic that very few areas of research allow for publications in the first year of a PhD. For example, if you are doing research in LLMs, you are looking at a couple of years just to be brought up to speed.
[1] - https://catalog.louisville.edu/graduate/programs-study/docto...
[2] - https://seas.harvard.edu/office-academic-programs/graduate-p...
[3] - https://www.math.harvard.edu/graduate/graduate-program-timel...
[4] - https://en.wikipedia.org/wiki/George_Dantzig
Also, top non-US universities often graduate their engineering students within 3-4 years of commencing, with 3-4 papers being a very common international expectation as well.
If you want to finish a PhD in 3 years and are interested in academia, in addition to following the path the OP has laid out you may also look to good international universities and then get your next 3-4 papers as a postdoc.
Typically, non-US universities have 3-year undergrads and 2-year masters prior to the PhD. End-to-end, you are looking at the same amount of time. There are, of course, exceptions; the UK, I think, shaves off a year by integrating undergrad and masters.
Hiding the years of a PhD in an extended postdoc rather defeats the point of the exercise. The median time for a CS PhD in the US is 7 years [1]. Subtract 2 years for good students but add a year of postdoc, and I think you have a realistic 5-6 years from start of PhD to first academic position for the top decile.
[1] - https://ncses.nsf.gov/pubs/nsf22300/report/path-to-the-docto...