I've seen a different trick in the past: adding an IMU[1] to the robot arm. Combining two different types of sensors is called sensor fusion[2], and it's really common to pair an IMU with GPS and slap a Kalman filter[3] on top for a very accurate position reading.
The particularly cool thing about this video, though, is that they could mount the new sensor within the motor itself, making it all a lot more compact.
[1] https://en.wikipedia.org/wiki/Inertial_measurement_unit
[2] https://en.wikipedia.org/wiki/Sensor_fusion
[3] https://en.wikipedia.org/wiki/Kalman_filter
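For a flavor of what that fusion looks like in practice, here's a minimal 1-D Kalman-filter-style sketch (all signals and noise values are made up, not from the video): predict the angle by integrating a fast-but-drifting gyro rate, then correct it with a slow-but-absolute angle reading. The same pattern applies to IMU+GPS or IMU+encoder.

    import numpy as np

    def fuse_angle(gyro_rates, abs_angles, dt=0.01, q=1e-4, r=1e-2):
        """Tiny 1-D Kalman filter: predict with the gyro rate, correct with an
        absolute angle sensor. q and r are made-up process/measurement variances."""
        angle, p = 0.0, 1.0              # state estimate and its variance
        estimates = []
        for rate, meas in zip(gyro_rates, abs_angles):
            angle += rate * dt           # predict: integrate the gyro
            p += q
            k = p / (p + r)              # Kalman gain
            angle += k * (meas - angle)  # correct with the absolute reading
            p *= (1.0 - k)
            estimates.append(angle)
        return estimates

    # Example: constant 1 rad/s rotation, a gyro with a 0.05 rad/s bias,
    # and a noisy absolute encoder. The fused estimate doesn't accumulate
    # the gyro drift and is smoother than the raw encoder.
    t = np.arange(0, 1, 0.01)
    truth = 1.0 * t
    gyro = np.full_like(t, 1.0) + 0.05
    encoder = truth + np.random.normal(0, 0.02, t.size)
    fused = fuse_angle(gyro, encoder)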
He goes into detail on the control algorithm of the project on GitHub. It's rather complicated... https://github.com/adamb314/ServoProject/blob/main/Doc/Theor...
wow I didn't know that Github supported MathJax! Also yeah, that's rather amazing/complicated stuff, thanks for pointing it out!
If anyone wants to build this sort of thing, the new Raspberry Pi Pico 2 is both orders of magnitude more capable than the chip used here and also around half the price.
It's by far the best value for money for an introductory 32-bit ARM/RISC-V embedded device right now.
It's relatively old at this point, but I'm still getting excellent performance from the Teensy 4.1. It's a little more expensive, around $30, but runs a Cortex-M7 at 600 MHz and includes a generous complement of I/O and protocols.
The IMXRT1011 is $1.70 when bought in bulk from China from various vendors, and it can do most of what the Teensy can.
A reputable vendor: https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU...
The IMXRT1064 can be had at $7: https://www.lcsc.com/product-detail/Microcontrollers-MCU-MPU...
I think it's still king of the hill. I've bought dozens.
Be aware, per erratum 9, that you’ll need to include external pull-downs instead of using the internal ones
> (...) the new Raspberry Pi Pico 2 is both orders of magnitude more capable than the chip used here and also around half the price.
That's cool and all but what are the tradeoffs?
Well it's new, it'll take a while for support to stabilize. The Pico 1 took almost a year given that the whole concept of a PIO was new, but this one should get there far sooner.
You can’t judge backlash by how the robot repeats the exact same set of movements over and over. That removes hysteresis from the problem definitionally.
But they're not the same motions? The second move is to the other side.
There are larger industrial robots that use secondary encoders to improve "out of the box" accuracy for more demanding tasks. The secondary joint feedback is paired with a kinematic model of the robot structure/mechanics to accurately predict where the robot tool point actually is.
https://electroimpact.com/Products/Robots/AchievingAccuracy
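A toy illustration of that pairing (a 2-link planar arm with made-up link lengths, not Electroimpact's actual model): feed joint-side (secondary) encoder angles through the forward kinematics instead of motor-side counts divided by the gear ratio, and gearbox backlash never enters the tool-point estimate.

    import math

    # Hypothetical 2-link planar arm; link lengths in mm.
    L1, L2 = 400.0, 300.0

    def tool_point(theta1, theta2):
        """Forward kinematics from joint-side (secondary) encoder angles in radians."""
        x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
        y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
        return x, y

    # A motor-side estimate would be motor_counts / gear_ratio, which hides
    # backlash; encoders mounted on the joints themselves report where the
    # links actually are, and the kinematic model does the rest.
    print(tool_point(math.radians(30), math.radians(45)))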
Of what use are the primary encoders then?
It lets you see the position of the motor's shaft. That's used in some motor control algorithms, even if the motor's position isn't exactly the joint's position.
If I'm not mistaken, one encoder measures the position and force applied by the motor, while the other measures the position of the business end of the robot, after all the slack.
Doesn't this youtube project infringe on the patent which this company holds, then?
https://electroimpact.com/Company/PatentFiles/US8989898B2.pd...
How is that patent even a thing from 2009? Position feedback in robots is WAY older than that. I have textbooks at least a decade older than that patent describing that very system, so I wouldn't be surprised if it falls over at the first prior use claim it encounters.
How in the world can a company get a patent on math and basic techniques that had been around for decades before the patent was even filed? I can understand materials, unique "first come" algos, brand-new mechanics, but there's nothing novel in that patent. There's nothing novel about having secondary (or tertiary, or ...) feedback for a system.
Legit question: if I replicate a patent for personal, non-profit use, is this infringement? Perhaps it is, because I'm benefiting from the intellectual property.
In the USA there's an exemption for research use of patents, specifically for "amusement, to satisfy idle curiosity, or for strictly philosophical inquiry." https://en.wikipedia.org/wiki/Research_exemption
Depends where you live. E.g. in France there's a personal use exemption, in the US there mostly isn't.
!> Yes, replicating a patented invention, even for personal, non-profit use, is technically considered patent infringement. A patent grants the inventor the exclusive right to make, use, sell, and distribute the patented invention for a certain period (usually 20 years from the filing date).
Call it what it is: investigative journalism.
What part of the patent, in your opinion, is infringed by the YouTube video?
Robot arms existed long before 2015, and a lot of them use some combination of encoders. The term "secondary feedback" by itself, without clarification, doesn't really mean anything specific, and in the terms used by the patent I would call this more like adding a primary/primary feedback system. The part the patent seems to hinge on is having a secondary position sensor attached to the mechanical joint of the robot (I assume as opposed to encoders already built into the servo drive), although the patentability of even that seems somewhat questionable in 2015. I am not that good at reading patents, so maybe I am missing the actually relevant/novel part of that patent.
In the video both encoders are built into the servo rather than attached to the arm itself; moreover, the extra angle sensor the author introduces is attached directly to the motor, before the gearbox and its slop, which is the complete opposite of what the patent tries to claim. An angle sensor attached to the output shaft after the gearbox is what all hobby servos already have.
If you go through the actual claims of the patent, most of them are not applicable to the video.

1) "System for large-scale assembly operations, ... secondary feedback mounted to joint ...": not suitable for large-scale assembly operations, and no feedback is attached to the joint; both feedback systems are built into the hacked servos and can't measure any slop within the joint itself or the servo-to-joint connection.

2) Angular accuracy of 0.05 arcminutes: very unlikely.

4) System of claim 1 wherein the manufacturing assembly is an aerospace assembly: no aerospace assembly being made here.

5) 6 rotary axes and 1 linear axis: no linear axis.

6) Secondary feedback system is an optical encoder: questionable whether the optical angle sensor attached before the gearbox matches the definition of "secondary encoder" as described by the rest of the patent. Also, "optical encoder" typically describes a relative position/angle sensor based on a bunch of slits and counting pulses, rather than an analog amplitude measurement that gives absolute position. Normally I wouldn't bother with minute differences in how the angle sensor is implemented, but since the patent explicitly lists very specific sensor technologies, I guess it matters. Otherwise they could just claim that there is an angle sensor/encoder.

7) Secondary feedback system is an inductive encoder: no inductive encoders here.

8) Magnetic encoder: no magnetic encoders.

9) Secondary feedback system is a resolver: no resolver here (as in an analog angle sensor based on AC coupling that changes with the angle between two parts to directly generate the sin/cos of the angle).

10) "System for accurate large-scale manufacturing assembly operations, ... >3-axis robot arm, with end tool, secondary feedback mounted on rotary joint": this more or less restates claim 1, only this time mentioning >=3 instead of >=6 axes for some reason, and mentions an end tool. Is a ballpoint pen an end tool for large-scale manufacturing operations? Also the secondary feedback issue discussed before.
BLDC motors require electronic commutation. The motor controller must read the current angle of the motor so that it knows which phases U V W to enable via six MOSFETs.
An ESC can cheat by reading the back-EMF, but that only works once the motor has started spinning, as in a drone; it doesn't work for a robot arm that is supposed to hold its position.
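For anyone curious what that angle-to-phase mapping looks like, here's a sketch of classic six-step commutation (one common ordering; the exact table depends on winding orientation and how the angle sensor is aligned, so treat it as illustrative):

    # Six-step (trapezoidal) commutation: the 0-360 degree electrical angle is
    # split into six 60 degree sectors; in each sector one phase is driven high,
    # one low, and one floats.
    # '+' = high-side MOSFET on, '-' = low-side MOSFET on, '0' = both off.
    COMMUTATION_TABLE = [
        # (U,   V,   W)
        ('+', '-', '0'),   # sector 0:   0-60
        ('+', '0', '-'),   # sector 1:  60-120
        ('0', '+', '-'),   # sector 2: 120-180
        ('-', '+', '0'),   # sector 3: 180-240
        ('-', '0', '+'),   # sector 4: 240-300
        ('0', '-', '+'),   # sector 5: 300-360
    ]

    def phase_states(electrical_angle_deg):
        """Pick MOSFET states for the current rotor electrical angle."""
        sector = int(electrical_angle_deg % 360) // 60
        return COMMUTATION_TABLE[sector]

    print(phase_states(95))  # -> ('+', '0', '-')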
Great, but that robot isn't doing an actual task?
I sort of struggle to see how getting good positioning accuracy from a high backlash system under zero load can have a useful application.
Maybe just lack of imagination on my part.
There is a trend that says make and buy bad hardware and let software solve it. I haven't noticed that paying off. Tesla using webcams for self-driving is an example. Boeing designing their planes and then relying on faulty angle-of-attack sensors is another.
I would be way more impressed if the robot did something useful. My suspicion is that its real world application capabilities are rather limited.
You have oversimplified the Boeing one: their goal was to create an efficient plane to compete with Airbus without needing the expense and delays of a new type certification.
To do this they needed bigger engines on the same frame, which in turn needed to be mounted further forward, affecting flight characteristics and requiring retraining. Retraining would be a sales killer, so they hacked on some software systems to attempt to make the plane fly like an older 737.
Then they could just use an iPad training course for pilots to upgrade. The augmentation had to keep the pilot from noticing (I think) the plane's tendency to get into a stall at too high an AoA (this is where my memory might be off...), so the MCAS software uses the AoA sensors to command nose-down trim based on the detected AoA.
The AoA sensors were never designed for a directly life-and-death-critical use case, and sometimes they got stuck or failed. MCAS only used one as an input. If MCAS incorrectly assesses that nose-down is required and the pilot follows their 737 training, they are having their last day. That plane is going down.
Basically, people were murdered by Boeing so that at every stage of this wretched plan they could make more money.
I think you are right, but Boeing was perhaps the worst possible case of asshole design, and deserves its own league.
> If MCAS incorrectly assesses that nose-down is required and the pilot follows their 737 training, they are having their last day. That plane is going down.
Boeing’s argument is that an MCAS trim runaway is able to be addressed by the (memory item) Trim Runaway checklist and the crew of ET302 correctly used the STAB TRIM CUTOUT on that checklist during their attempt to save the flight. They then undid that action, in order to manually command nose-up trim (also reasonable under the circumstances, though contrary to the checklist), then stopped commanding nose-up trim while leaving the trim runaway checklist item reverted, allowing MCAS to continue the trim runaway that they’d previously correctly stopped by following basic 737 training. Then the flight was lost.
Boeing did wrong here, but their argument was that if a 737 pilot correctly executed the emergency checklist that is drilled into them during initial type training and in recurrent training, they’d be able to overcome that emergency. That falls into at least the probably technically correct category to me.
(The yoke displacement method to disconnect the autopilot was not part of the emergency checklist for stab trim runaway.)
Arguably the problem is that Boeing absolutely and utterly failed to do what they set out to do. After all, if MCAS failures had presented like the usual 737 runaway stabilizer, then the certified pilots would have been able to handle them as such. But "runaway MCAS" was a completely new phenomenon (one factor being the absolutely idiotic "on for a few seconds and then off for some" cycle).
And as we know the FAA also was clueless, as they approved Boeing's "safety analysis".
>>> Extensive interviews with people involved with the program, and a review of proprietary documents, show how Boeing originally designed MCAS as a simple solution with a narrow scope, then altered it late in the plane’s development to expand its power and purpose. Still, a safety-analysis led by Boeing concluded there would be little risk in the event of an MCAS failure — in part because of an FAA-approved assumption that pilots would respond to an unexpected activation in a mere three seconds.
And, just to drive the point home, on top of all this the FAA completely dropped the ball, because it did not notice that it had allowed Boeing to break their own base conditions, which in effect invalidated the safety analysis.
>>> As Boeing and the FAA advanced the 737 MAX toward production, they limited the scrutiny and testing of the MCAS design. Then they agreed not to inform pilots about MCAS in manuals, even though Boeing’s safety analysis expected pilots to be the primary backstop in the event the system went haywire.
It's understandable that Boeing wanted to avoid simulator training, but apparently this regulatory discontinuity (ie. either same or different, no in-between, as far as I understand) forced them to concentrate so much on avoiding the need for new type certification that they ended up completely believing their own crazy tale about the two models' sameness, which led to hiding information from pilots.
https://www.seattletimes.com/seattle-news/times-watchdog/the...
I think it may have been a contractual term where Boeing could avoid a $1M reduction in purchase price per aircraft (times 280 aircraft) if simulator training could be avoided for the launch customer, Southwest Airlines.
https://www.sciencedirect.com/science/article/abs/pii/S10575...
There's some really negligent stuff, like changing how to disable the autopilot (i.e., MCAS), as the pilots of both crashed planes attempted actions that would have disabled the autopilot on previous models.
If the pilots know how this sausage is made, it ain't a 737 anymore. I think that is the reason they rolled the dice, sadly.
Wasn’t the Boeing issue completely preventable with an inconsequential extra part that cost nothing? Like the short cuts actually worked but they literally went all the way to almost succeeding and snatched defeat from the jaws of victory. (Aside from all the other things they did that also contributed to disaster situations going worse)
I don't know. Maybe an expert can chime in but I think it is a hard problem because of ice etc. I think the 737Max has the problem where AoA matters more because you can get into a stall you can't get out of.
Whereas maybe before, on older planes, you'd get into a stall and nose down to reduce AoA. You don't need a sensor to know this; look at altitude, etc.
So now you need perfect ten nines of reliability AoA sensors. Their use case has gone from a data point to mission critical, but the sensor is the same.
You never want to get into a stall in a large commercial jet. Private pilots are taught stall and maybe spin recovery techniques for small GA aircraft. ATP rated pilots are taught stall/spin avoidance.
Chances are, if your AoA is anywhere near the critical AoA, a competent pilot is likely aware of it. The sensors are just another safety factor on top of that to help ensure situational awareness.
Or, in the case of the 737 MAX, to trigger a chain of events that proved lethal to hundreds of people. That's the secondary use of the AoA sensor in combination with the FC software that they implemented. It would have been relatively easy to integrate the AoA input with other sensors to eliminate this problem, but it would have invited a deeper look at the hazards of their design decisions.
Bean counters bathing in blood, all the way down.
> Bean counters bathing in blood, all the way down.
No resource is infinite and money is an important constraint in any engineering project. Engineering is all about making compromises. Good engineering is making the right compromises: especially when life and death decisions are being made.
Casually blaming "bean counters" is a distracting fantasy available to anyone that doesn't have to make real-world decisions. Understanding the causes of how Boeing systematically screwed up requires a bit more maturity than you appear to show. "Bean-counters" particularly comes across as childish name-calling to me, and clichés don't help either.
The fact that the MAX has been cleared to fly again shows that the design decisions were not utterly flawed.
The design decisions would have been acceptable if they had admitted that the new design necessitated significant new training for the pilots, who were now flying a version of the 737 that could lose positive stability in some corners of its flight envelope... a fact they buried to reduce scrutiny from regulators (or facilitate deniability) and to make it an easier sell to airlines.
Bean counters bathing in blood, all the way down.
The forward mounting of the engine nacelles could have been countered with a small adjustment to the sweep or surface area of the horizontal stabiliser, instead of the faulty flight-control-software solution, keeping the aircraft aerodynamically safe as earlier generations had been. But that would have been a de facto admission that the fundamental aerodynamic characteristics of the aircraft as certified were changed by the forward-mounted nacelles.
They chose to monkeypatch the flight control system instead of making a minor change that would have produced the inherently safe aerodynamic characteristics that the aircraft was certified with.
They did this to avoid the delay and cost that would have resulted if they had been required to prove the aircraft design was still airworthy. There’s a reason that new designs must be certified to be used in passenger transport. They tried to work around the fact that the 737 max is a substantially new aircraft by monkeypatching the FCS to compensate for a potentially dangerous aerodynamic flaw that was introduced by the new location of the engines.
They chose to produce a more profitable but potentially dangerous aircraft instead of letting the engineers do their job and make the aircraft stable with the new engines. Regulators were also complicit in the regulatory evasion. Hundreds died as a direct result of this malfeasance.
Bean counters bathing in blood, all the way down.
The accountants are part of the engineering on large engineering projects.
> instead of letting the engineers do their job
This is your central point: you imply engineers are infallible and therefore it must be someone else's fault.
It's a problem due to systemic effects. As you point out, the mistakes have obvious fixes if you have perfect 20/20 hindsight.
There was significant debate within Boeing about aerodynamic fixes for the forward mounted nacelles. The aerodynamic fix was rejected because it would result in additional regulatory requirements for flight testing and certification. The FCS was certifiable with a pen.
You are correct in saying that the accountants are a critical part of the company and the engineering; it's the MBAs in leadership that I'm referring to derisively.
Except anyone who has read up on this topic knows that Boeing got fined for several billion dollars by the FAA and that the FAA has increased the training requirements and that Boeing has lost 20 billion dollars from aircraft groundings and cancelled orders.
Clearly, it doesn't look like Boeing was hurting for money whatsoever. Bean counters allocate money to billion dollar fines but they won't allocate it to safety and good engineering.
There aren't any deep or hidden truths behind the crashes. Turn off the MCAS and you don't get autopiloted into a crash, but telling pilots to turn off the MCAS would defeat its purpose, which is to save money on recertification and pilot training precisely by keeping it a secret.
Regarding applications for robots that have to move very precisely without carrying a load, there are robotic measurement systems: https://en.wikipedia.org/wiki/Coordinate-measuring_machine
Look at his later videos, where he has the servo lift a weight on a long arm.
Eureka! Other videos! Thank you for the idea.
One example of real work: https://www.youtube.com/watch?v=GCHXNcpq3OA
Question for anyone who has used one of these analog measuring devices: the indicator seems to go all the way around before the camera zooms in to read the indicated value. Is this video actually showing the accuracy it is claiming?
I haven’t watched the whole video, but I’m assuming what they were showing was ‘move x from 0.00 to 10.00’ with the gauge showing the final move was to (actual) 10.05.
Which, given how floppy that rig is, is pretty impressive.
Notably though, those gauges do need to be ‘preloaded’ (compressed into their ‘positive’ range) to be able to measure negative direction shifts, and while it looks like that was done, I can’t be 100% sure without analyzing it far more than I want to do right now.
Also, those gauges provide a degree of preload (not much, but some), which might be taking a bunch of slop out of the system and giving overly rosy accuracy numbers.
I think it's okay that they use the contact force to remove backlash, since they are actually controlling it. In fact, if you could do that well, that's huge!
I don't think they could do that sustainably while it's actually doing 'the job' though, correct? It's pretty in the way.
Yes. The sphere at the tip has a certain radius, and the indicator will show zero (again) when the sphere has been deflected by its radius (i.e. the contact point is exactly at the center line). When out of contact, it's essentially telling you that you're missing at least a whole millimeter to the point where you should be.
Often there is a second needle indicating which of these situations you're in, but I assume it's not considered necessary because if you're 1mm off, the situation is (in the contexts in which these devices are used) very obvious.
How do 3-axis robots you can buy for $100 (3D printers) have a static accuracy of 0.05 mm?
It's not control theory, but mechanics and steppers.
It’s the compromise between a gantry vs arm design
There are plenty of 3D printers without a complete gantry, the Bambu Lab A1 Mini or Prusa Mini to name just two.
Those two printers are also smaller and not as accurate at high speeds. The A1 Mini's slicer automatically places parts close to the Z axis in an attempt to reduce the issues and it uses input shaping, but given the printer can lift itself off the ground at default speeds that's not a perfect solution either.
There's a reason the larger and faster printers often use the CoreXY design instead.
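Since input shaping came up: the basic ZV (zero-vibration) shaper just splits each motion command into two impulses tuned to the frame's resonant frequency, so the second impulse cancels the ringing excited by the first. A minimal sketch with placeholder numbers (real firmware measures the frequency and damping with an accelerometer or ringing test prints):

    import math

    def zv_shaper(natural_freq_hz, damping_ratio):
        """Return the (time, amplitude) impulse pairs of a ZV input shaper."""
        wn = 2 * math.pi * natural_freq_hz
        wd = wn * math.sqrt(1 - damping_ratio ** 2)        # damped frequency
        k = math.exp(-damping_ratio * math.pi / math.sqrt(1 - damping_ratio ** 2))
        a1, a2 = 1 / (1 + k), k / (1 + k)                  # impulse amplitudes
        t2 = math.pi / wd                                  # half a damped period later
        return [(0.0, a1), (t2, a2)]

    # e.g. a lightly damped 40 Hz gantry resonance (made-up values)
    print(zv_shaper(40.0, 0.05))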
Of course, there are always trade-offs in (mechanical) design choices. But their static accuracy absolutely is that good, which is fascinating at that price point.
Yes, but they're not robot arms, so it's not as fascinating. The length of the arm amplifies error: if you made a "mechanics and steppers" arm with the same positional accuracy as a printer, the motors would have to be much more precise, or, if you geared them down, the backlash would have to be extremely low, like an industrial robot arm's.
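To put a rough number on the amplification (back-of-the-envelope, with made-up values): 0.1 degrees of backlash at the shoulder of a 500 mm arm already moves the tip by almost 1 mm, many times the 0.05 mm figure above.

    import math

    # Hypothetical arm: 500 mm long, 0.1 degree of backlash at the base joint.
    arm_length_mm = 500.0
    backlash_deg = 0.1

    tip_error_mm = arm_length_mm * math.sin(math.radians(backlash_deg))
    print(f"tip error ~ {tip_error_mm:.2f} mm")  # ~0.87 mm, vs 0.05 mm for the printer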
Sure, there's no free lunch.
I'm curious: what would be the best way to replicate an x/y (and optionally z) system with 0.05 mm or better accuracy, without sacrificing speed of course?
Depends on how much power you need (speed times force, acceleration and deceleration...) and how much stiffness you need (can't bend?).
Also depends on how much travel you need. It is easier to get 50 micron accuracy over a total length of 100 micron compared to a total length of 1 meter.
Making things lightweight is crucial, because otherwise inertia will make your toolhead deviate from the planned trajectory at any corner.
I've always wondered... Why aren't pantographs used more with robots when precision is needed?
It's used to cut precise wood pieces or carve wood or metal etc.
https://en.wikipedia.org/wiki/Pantograph
https://youtu.be/s56J_Rnh_Co
You use the "big" part to drive the "small" one, which gives it great precision.
Feynman brings up some problems with pantographs and precision in https://calteches.library.caltech.edu/1976/1/1960Bottom.pdf (page 6-7). I haven't thought about it myself, though.
Pantographs are mostly useful in 2 dimensions, and any robot only needing two dimensions can just use rails, which will be more accurate.
That is a really great encoder trick. I wouldn't have thought it was that good, but it clearly is.
What their video demonstrates is mostly same-direction repeatability, not absolute static accuracy. They can correct for backlash at individual motors, but not slop or bend in the linkages.
This uses DC motors. If you use modern 3-phase servomotors, you know more about what the motor is doing.
I have a hunch that Optimus likewise leans heavily on inverse kinematic modeling, though not using the paper-plate tech.
It would be sick if they used a pure vision ML approach to train a heuristic understanding of its own muscles, instead of these fixed rotary encoders which don't account for material deflection, sensor dislodgement, etc., sort of like Meta Quest player tracking in the SLAM loop.
I don't see how the second sensor would improve accuracy (or rather precision). If I understand correctly, the second sensor allows for improved speed. Couldn't backlash of the motor (and gears and linkage) be accounted for using a PID controller?
That said, I'm impressed by how precise this rather flimsy-looking robot actually is.
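For what it's worth, a PID alone can't see backlash if the loop is closed on the motor-side encoder; it only helps once the loop is closed on an output-side sensor, because then the dead zone shows up as an error the controller keeps pushing (and integrating) through. A bare-bones sketch of that idea (gains and names are made up; this is not the ServoProject controller, which is considerably more sophisticated, see the theory doc linked earlier in the thread):

    class PID:
        """Minimal PID controller; dt in seconds, gains are illustrative."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Close the loop on the joint/output sensor, not the motor encoder:
    # the controller keeps commanding the motor until the *output* reaches
    # the target, so gearbox backlash is driven through rather than ignored.
    pid = PID(kp=2.0, ki=5.0, kd=0.05, dt=0.001)
    # command = pid.update(target_angle, output_sensor_angle)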
With direct drive (no gear box), do we still need the secondary encoder?
No, but you will almost certainly not get the required torque for a robot arm.
Closed loop feedback is the key to high robot accuracy with cheap parts. The big trick is the position detector, and this person figured that out.
Kudos!
In addition to this technique you can also use kinematic calibration which takes it to a whole other level.
I think £300 is an important part of the title.
I'm not a fan of the youtube link trend on HN, as cool as the latest robots are. I know they're encroaching on territory previously held by much heavier additive and subtractive machines.
And I am okay with YouTube when a video makes sense, but in this case they have basically crammed a short article into a video, making it more awkward to follow: slides with text and diagrams, some background music, and only a video demonstration at the end.
I also caught myself thinking that most of the content would be more accessible as an article; I needed to pause and rewind several times. Although the article would include some video fragments (the final demo and some others).
Are you saying you don't like video and would prefer text, or is it something specific to YouTube that you object to? For many topics, video is really helpful in understanding stuff.
It's a mix of both, I guess. I don't like YouTube taking over from text-based sources. It's less accessible, way less efficient, and feeds into the Google surveillance machine.
This doesn't address your (valid) systemic concerns, but on efficiency: the way I like to use links like this is by adding them to a queue for later; then, if I have an opportunity to play them while doing the dishes or commuting, or if I just feel like watching television, I play them back, usually at a playback speed higher than 1.0. If I'm just sitting on the couch actively watching, I'll read ahead in the transcript and skip ahead if I feel like it.
Unlike the web, youtube has a functional search ;)
I love hackernews