Project Mow — RT‑72 build log

Teaching a commercial zero-turn to mow itself.

The Rogue Tensor 72″ is our autonomous mowing platform — a Bad Boy Rogue 72″ with the Kawasaki FX1000V EFI, retrofitted into a mower that learns a property, maps its keep-outs, and quietly takes the Saturday-morning chore off the calendar. This page is the running log of every decision that got us here.

Bad Boy Rogue 72" — three-quarter view showing the deck and the 'Rogue' badge
72″ cutting deck · 999 cc Kawasaki FX1000V · RTK cm-grade GPS · 3 cameras · ROS 2 on Jetson
01

Why the Bad Boy Rogue 72″ EFI

Before any code or sensors, we needed a chassis we could trust to take the abuse of being controlled by a computer for hours at a time. Here's how we landed on the Bad Boy Rogue.

Zero-turn, by necessity

A zero-turn radius (ZTR) mower controls each rear wheel independently with a hydrostatic transmission. From a controls standpoint, that's a gift — the platform is already a differential drive, which is the same kinematic model nearly every wheeled robot uses. We don't have to fight an Ackermann steering geometry or simulate a virtual rear wheel. Forward kinematics and odometry slot right in.
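That kinematic claim is easy to make concrete. Here's a minimal differential-drive odometry step in Python; the track-width constant is a placeholder for illustration, not a measurement off the Rogue:

```python
import math

TRACK_WIDTH_M = 1.2  # hypothetical rear-wheel spacing; measure the real chassis

def diff_drive_step(x, y, heading, v_left, v_right, dt):
    """One Euler integration step of differential-drive odometry.

    v_left / v_right are rear-wheel ground speeds (m/s); returns the
    updated (x, y, heading) pose after dt seconds.
    """
    v = (v_left + v_right) / 2.0                 # forward speed of chassis center
    omega = (v_right - v_left) / TRACK_WIDTH_M   # yaw rate from wheel differential
    return (
        x + v * math.cos(heading) * dt,
        y + v * math.sin(heading) * dt,
        heading + omega * dt,
    )
```

Equal wheel speeds drive straight; equal-and-opposite speeds spin in place, which is exactly the zero-turn behavior the hydrostatic transmissions give us for free.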

The Kawasaki FX1000V EFI

The Rogue 72″ ships with Kawasaki's FX1000V EFI — a 999 cc, 90° air-cooled V-twin rated by Kawasaki at 38.5 hp at 3,600 RPM and 57.8 lb-ft of peak torque at 3,200 RPM. It's a commercial engine in a commercial deck, and the EFI side of it matters to us for reasons that all get worse the moment a human isn't standing on the platform:

  • Cold starts without a choke. A carbureted engine wants a warm-up ritual. Autonomous operation means the mower might fire up at 6 AM in a heavy dew — the FX1000V's sequential multi-port injection just runs.
  • Electronic altitude compensation. Kawasaki's ECU adjusts the mixture for ambient conditions automatically. Carbs need to be re-jetted; the EFI doesn't care whether we're at 600 ft or 6,000 ft.
  • Power that tracks load. Kawasaki advertises that the integrated electronic throttle "matches blade speed and power to load and terrain." In practice that means steadier RPM through thick grass — steadier RPM is steadier blade tip speed, which is a cleaner, more even cut.

The Rogue, specifically

Bad Boy's commercial line is the Rogue, and the spec sheet shows it. The 72″ deck is 3-gauge fabricated with a 1/4″ top, reinforced 3/8″ sides, and a 1/2″ leading edge — meaningfully heavier-built than a stamped residential deck, which matters when the thing driving it is a state machine and not a person reading the terrain ahead. Underneath, dual 16 cc Hydro-Gear pumps drive 18 ci Parker wheel motors — commercial drivetrain components, not the integrated transaxles you find on consumer ZTRs. Top ground speed is 13 mph; curb weight is 1,595 lb.

The frame uses cast I-beam rails and a 3-link rear trailing-arm suspension with independent fronts (Bad Boy's "EZ-Ride"). The suspension matters more than it sounds: a rigid frame transmits every bump straight into the camera mounts, the IMU, and the GPS antenna. Compliance in the chassis means cleaner sensor data.

Where we bought it

For the record: we paid full retail. Bad Boy isn't a sponsor, Kawasaki isn't a sponsor, nobody is — this is a self-funded project and every part on the mower came out of our own pocket. We picked it up from Springfield Mow in Springfield, Missouri, and the experience was genuinely excellent: straight answers on what the Rogue does well, what the trade-offs are, and zero pressure to upsell into something we didn't need. If you're in southwest Missouri and shopping for a commercial ZTR, they're worth the drive.

Top-down view of the engine bay — Kawasaki FX1000V V-twin, Hydro-Gear pumps, and drive belts
02

Actuators — what we got wrong before we got it right

Two hydraulic lever pushrods, one PTO clutch. The path here had three pivots: away from hobby servos, away from "100% duty," and away from the lap bars themselves.

The temptation: hobby servos

The first instinct — and the one a lot of DIY mower builds reach for — is a pair of large hobby servos. They're fast, they're precise, and the firmware to drive one is a single PWM line. We seriously considered them. The problem is that mowing a long row holds the controls at near-full deflection for minutes at a time, and hobby servos are spec'd for movement, not for parking against a continuous load. Hold a hobby servo at stall against a return spring for a 4-hour mow and you cook the windings.

The duty-cycle reframe

Our first pass at the math said we needed a 100% duty cycle actuator. The reasoning was: the operator holds the controls forward for minutes at a time, the actuator is "working" that whole time, so a 25%-rated unit will overheat. We went looking at industrial 100%-duty offerings — bigger motors, thermal paths, $400+ per unit.

Then we noticed the property of ACME-screw linear actuators that broke the argument: they self-lock under load with zero motor current. The screw thread holds whatever position the motor reached, indefinitely, against any load below the static rating. That means the duty cycle only accrues during movement — the brief pulses to extend, retract, or correct — not during the long minutes the control is being held. In real mowing, the actuator is moving maybe 5–15% of the time. A consumer-grade 25%-duty actuator with the right force rating is plenty.
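A quick way to see the reframe as arithmetic (the pulse numbers here are illustrative, not measured):

```python
def accrued_duty(move_time_s, window_s):
    """Duty cycle a self-locking ACME actuator actually accrues.

    The screw holds position with zero motor current, so only time
    spent moving counts against the rating; holding a long straight
    row costs nothing.
    """
    return move_time_s / window_s

RATED_DUTY = 0.25  # consumer-grade 25% duty rating

# Illustrative mow minute: a dozen short correction pulses of ~0.4 s
# each is 4.8 s of motion in 60 s, i.e. 8% accrued duty, well under
# the 25% rating.
minute_duty = accrued_duty(12 * 0.4, 60.0)
```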

That reframing collapsed the budget by an order of magnitude and freed up the speed axis — which mattered, because the slower the actuator, the longer the stop distance when the operator releases the controls.

The tradeoff is real, and it's worth being honest about. Self-locking ACME means there's no mechanical return path if an actuator dies. If the motor seizes or the driver hangs while the actuator is parked forward, the screw holds — the mower keeps driving until something else stops it. A 100%-duty industrial servo cylinder with a back-drivable ball-screw (more on those further down) actually spring-returns on power loss; that's a mechanical failsafe baked into the geometry, and we don't get it. We accept the tradeoff because the engine-kill chain — not the actuators themselves — is our primary safety-critical stop path. If the lap bars stop responding, killing the engine kills the drive. The actuators are the day-to-day control surface; the engine-kill is the structural failsafe.

Class | Continuous use? | Why it works (or doesn't) here
Hobby servo | No | No self-locking — cooks windings under a held load
ACME-screw actuator (25% duty) | Yes, in this app | Self-locks at hold — duty only accrues during transitions
Industrial 100%-duty actuator | Yes | Overkill for an ACME-screw application; pay for capability you don't use

The bigger pivot: not the lap bars, the pushrods

The original plan was to mount actuators against the lap bar handles — the obvious place, because that's where the operator's hands go. We measured the spring force at the handle: 25 lb. Picked an actuator. Then realized that 25 lb wasn't actually the right number to size the actuator from.

Lap bars are levers. A 25 lb push at the handle gets multiplied (or divided) by the lever ratio before it reaches the pushrod that actually controls the swashplate on the hydrostatic pump. Worse, leaving the lap bar mechanism in place means the actuators have to fight slop in the linkage joints, the lap bar pivots, and the rod ends — all of which add error to closed-loop position control.

So we pivoted: tie the actuators directly into the hydraulic lever pushrods, and disconnect the lap bar mechanism entirely. Cleaner mechanically, fewer moving parts, and the operator's manual fallback path (grab the lap bars and drive) goes away — which is what we wanted, since this is a full drive-by-wire build. The engine-kill safety chain is the failsafe; the lap bars don't need to be one too.

Re-measuring at the pushrod

With the lap bar mechanism unbolted from the pushrod, we put a fish scale directly on the pushrod end and pulled it through its full travel:

  • Travel: 1″ in each direction from neutral — 2″ total throw
  • Peak force: 25 lb against the swashplate return spring

The pick: PA-01-4-56-POT-12VDC

After three rounds of spec-shopping (the Build Log has the receipts), we landed on the Progressive Automations PA-01, 4″ stroke, 56 lb force tier, with potentiometer feedback, 12 VDC. Why this specific configuration:

  • 56 lb force against 25 lb measured = 2.24× margin, comfortably above the industry rule of thumb
  • 4″ stroke against 2″ needed = 2× margin for mounting geometry tolerance
  • 1.02″/sec at full load, faster no-load — gives a stop distance of ~6 ft at 5 mph mowing speed once the swashplate spring assists the retract
  • 10 kΩ linear potentiometer included for closed-loop position control on the Teensy — absolute reading, no homing routine needed on power-up
  • IP65 native — no bellows boot needed for outdoor mower duty
  • ACME self-locking screw — the property that made 25% duty cycle adequate
  • ~$160 each, $320 for two — the cleanest balance of speed, force, feedback, and IP rating in this price tier
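The margin and stop-distance figures in that list reduce to a few lines of arithmetic, reproduced here as a sanity check (all inputs are the numbers quoted above):

```python
MEASURED_FORCE_LB = 25.0    # fish-scale peak at the pushrod
ACTUATOR_FORCE_LB = 56.0
NEEDED_STROKE_IN = 2.0      # 1" each side of neutral
ACTUATOR_STROKE_IN = 4.0
LOADED_SPEED_IN_S = 1.02
MOW_SPEED_MPH = 5.0

force_margin = ACTUATOR_FORCE_LB / MEASURED_FORCE_LB    # 2.24x
stroke_margin = ACTUATOR_STROKE_IN / NEEDED_STROKE_IN   # 2.0x

# Worst case: retract 1" from full-forward to neutral at the loaded
# speed while the mower covers ground at 5 mph (7.33 ft/s). Spring
# assist on the retract pulls this down toward the ~6 ft quoted above.
retract_time_s = 1.0 / LOADED_SPEED_IN_S
stop_distance_ft = MOW_SPEED_MPH * 5280 / 3600 * retract_time_s  # ~7.2 ft
```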

Two units, one per pushrod. The actuator's neutral position is mid-stroke (2″ extended); operator-commanded forward and reverse map to 75% and 25% of the actuator's travel range, leaving 1″ of margin at each end for bracket misalignment.
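The neutral / 75% / 25% mapping, sketched in Python (function names are ours; the real loop lives on the Teensy):

```python
STROKE_IN = 4.0
NEUTRAL_IN = 2.0   # mid-stroke
SWING_IN = 1.0     # +/- 1" of pushrod travel from neutral

def command_to_position(cmd):
    """Map a normalized drive command in [-1, +1] to an actuator target (inches).

    cmd=+1 (full forward) -> 3.0" (75% of stroke), cmd=-1 -> 1.0" (25%),
    cmd=0 -> 2.0" neutral, leaving 1" of margin at each end of the stroke.
    """
    cmd = max(-1.0, min(1.0, cmd))
    return NEUTRAL_IN + cmd * SWING_IN

def pot_to_position(adc_counts, adc_max=1023):
    """Convert the feedback-pot ADC reading to inches of extension.

    Assumes the pot is linear over the full stroke; calibrate against
    the hard stops on the bench before trusting it.
    """
    return STROKE_IN * adc_counts / adc_max
```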

The two we benchmarked it against

Two other configurations made the short list before we landed on the PA-01. One was a near-miss; the other we'd have ordered in a heartbeat at half the price. Side by side:

Spec | PA-01-4-56-POT | LACT2P-12V-10 | Ultra Motion A2 (e.g. A2PZ8B-B0M0E0)
Stroke | 4″ | 2″ (1.97″ usable) | up to 7.75″
Force | 56 lb | 55 lb | 270 lb cont. / 530 lb peak
Speed (loaded) | 1.02″/sec | 0.9″/sec | up to 14″/sec
IP rating | IP65 | IP65 | IP65
Op. temperature | 5° to 40°C | −25° to 65°C | industrial-grade
Feedback | 10 kΩ pot | 10 kΩ pot | contactless absolute (Phase Index®)
Duty cycle | 25% | not specified | 100% continuous
Self-locking? | ACME (yes) | yes (holds unpowered) | configurable: ACME or ball-screw
Control interface | PWM / H-bridge + analog | PWM / H-bridge + analog | CAN 2.0B / RS-422 / 4-20 mA / ±10 V
Price each | ~$148–160 | $193 | $1,500–$2,500
Pair total | ~$320 | ~$390 | ~$3,000–$5,000

LACT2P-12V-10 — the near-miss

Concentric/Glideforce LACT2P-12V-10 via Pololu. Same force class as the PA-01, same IP rating, same feedback type, similar self-locking behavior. The one place it actually beats the PA-01 is operating temperature range — −25° to 65°C vs the PA-01's 5° to 40°C window — which would matter if we mowed in late-fall frost or 100°F+ summer afternoons.

Two things killed it. First: its 1.97″ usable stroke equals our 2″ pushrod travel exactly, which leaves zero margin for bracket misalignment, end-fitting tolerance, or wear. The actuator would slam its limit switch on every full-stroke command. The PA-01's 4″ gives us the documented 2× margin. Second: it's 30% more expensive than the PA-01 ($193 vs $148). Worse stroke margin and a higher price isn't a trade you make. The apples-to-apples LACT family upgrade is the LACT4P-12V-10 (4″ stroke), but at that point the price gap shrinks and you still take a small hit on speed and stall current.

Ultra Motion A2PZ8B-B0M0E0 — sweet specs we couldn't justify

The Ultra Motion A2 servo cylinder is a fundamentally different class of product, and the specs are genuinely beautiful: industrial servo cylinder with contactless absolute position feedback (no potentiometer wear path), field-oriented control of a brushless DC motor, 100% continuous duty, configurable as either self-locking ACME or back-drivable ball-screw, with a choice of CAN 2.0B, RS-422 serial, 4-20 mA, or ±10 V control inputs. Software-defined end-of-stroke limits replace mechanical limit switches.

The case for it isn't subtle:

  • Back-drivable ball-screw option: a failed actuator would spring-return rather than locking the pushrod wherever it died.
  • CAN bus control: the Teensy could bypass the analog ADC pot-feedback path entirely and read absolute position digitally over a noise-immune bus.
  • 100% duty cycle: the entire ACME-screw-self-locking duty-cycle argument from a couple sections up becomes irrelevant. It just runs.
  • Phase Index® contactless feedback: no wiper pot to wear out over years of vibration.

The case against it is the price tag: $1,500 to $2,500 per unit, depending on configuration. Two of them is $3,000 to $5,000 — for one mower's drive actuators alone, not counting the PTO. The PA-01 covers the same control job at roughly one tenth the cost. We took the savings and put them into LIDAR, RTK GPS, and compute, where the dollars buy more capability than they would on actuator over-engineering.

If this build were destined for a customer or shipping as a kit to other people, the calculus would shift — the A2's reliability margin and back-drive failsafe earn their keep when you can't personally service the machine. For a one-off build where the engine-kill chain handles the actual safety-critical stop, the PA-01 is the right tool at the right price.

The PTO clutch is its own animal

Engaging the blades is a single 12 V solenoid clutch — binary on/off, switched by a logic-level MOSFET behind a flyback diode. We kept this dirt simple on purpose: when the safety system says "blades off," there's exactly one wire to interrupt.
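The "one wire to interrupt" design boils down to a single AND gate. A sketch of that logic (the real check runs in the safety MCU firmware; the condition names are ours):

```python
def pto_allowed(estop_clear, watchdog_ok, engine_running, blades_requested):
    """Single gate for the PTO clutch MOSFET: every condition must hold.

    Any False de-energizes the clutch and the blades stop. One boolean,
    one output, nothing else in the path.
    """
    return estop_clear and watchdog_ok and engine_running and blades_requested
```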

03

What we learned about cameras

Sun. Dust. Vibration. Bumps. The yard tries very hard to make a camera lie to you.

Global shutter, always

Rolling-shutter sensors give you "jelly cam" the moment the chassis hits a tree root. Every line is exposed at a slightly different time, so straight fence posts come out curved. You can maybe dewarp it in software; you absolutely cannot trust the geometry for SLAM or obstacle distance. We pay the global-shutter premium.

Dynamic range matters more than megapixels

A 12 MP sensor staring at a deep shadow next to a sunlit driveway just clips everything. A 2 MP sensor with proper HDR / wide-dynamic-range modes (think IMX462 class) reads both ends of the histogram. We'd rather have a clear 2 MP image than a useless 12 MP one.

Stereo is great until the lenses get dirty

Stereo depth assumes both lenses see the same world. Pollen on one lens and not the other and your disparity map turns into modern art. We baked in two mitigations: hydrophobic lens coatings, and a confidence threshold that throws out points whose left/right matches don't agree.
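That disagreement filter can be sketched as a classic left-right consistency check, shown here for a single scanline. This is a simplified stand-in for whatever the depth pipeline actually runs, not the production code:

```python
def lr_consistent(d_left, d_right, x, tol=1.0):
    """Left-right consistency check for one pixel on a scanline.

    d_left[x] is the disparity found matching left->right; the
    corresponding right-image pixel is x - d_left[x]. If the
    right->left disparity there disagrees by more than tol pixels,
    the match is unreliable (e.g. one lens dirty) and is discarded.
    """
    d = d_left[x]
    xr = x - int(round(d))
    if xr < 0 or xr >= len(d_right):
        return False
    return abs(d_right[xr] - d) <= tol

def filter_scanline(d_left, d_right, tol=1.0):
    """Keep only consistent disparities; inconsistent ones become
    holes (None) rather than phantom obstacles."""
    return [d_left[x] if lr_consistent(d_left, d_right, x, tol) else None
            for x in range(len(d_left))]
```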

Wide field of view, low distortion

A 120° horizontal FOV gets us peripheral awareness without going full fisheye. Past ~140° the edge pixels carry so little angular resolution that we'd rather just add a third camera.

USB 3.0 over GigE, for now

GigE Vision is the "right" answer industrially — deterministic, long cable runs, PoE. But for a single-vehicle build with the compute mounted three feet from the cameras, USB 3.0 is half the cost and half the configuration. We can revisit when we want PoE-powered cameras on the deck corners.

Vibration kills mounts, not sensors

The CMOS itself shrugs off mower vibration. The thing that fails is the lens locking ring backing off and slowly defocusing the image over a 40-hour mow. Loctite and a witness mark on every lens. Boring, essential.

The forward-facing stereo bar mounted to the ROPS

Hardware on this build

Not sponsors. Not partners. We bought their gear at retail — this is just credit where credit's due.

04

Raspberry Pi vs. Jetson

Early plans had a Raspberry Pi 5 doing the whole job. Once we wrote down the perception loop we actually wanted, the math stopped working.

Raspberry Pi 5

Considered
  • Massive community, every accessory you can imagine
  • Low power draw — easy to keep cool
  • No real GPU. CPU inference of even a small detector is single-digit FPS
  • Limited camera bandwidth via CSI when running multiple streams
  • No CUDA — locks us out of the easy ML ecosystem

Jetson Orin Nano Super

Picked
  • 1024 CUDA cores + 32 tensor cores — real-time vision
  • 67 TOPS in MAXN_SUPER mode (the "Super" variant unlocks this in JetPack 6.1+)
  • JetPack ships with CUDA, cuDNN, TensorRT, ROS 2 builds
  • 256 GB NVMe SSD on board — plenty of room for maps, models, and session logs
  • Hotter and hungrier — serious thermal planning required

How we decided

The deciding factor was the perception loop. Once you commit to running stereo depth, obstacle classification, and grass-vs-not-grass segmentation against multiple camera streams in parallel, you've left the Pi's comfort zone. CPU-only inference of even a small detector model is single-digit FPS on a Pi 5; an add-on AI accelerator helps, but you're still working around limited camera bandwidth and a software stack that wasn't built for it. The Jetson Orin Nano Super is built for exactly that workload — CUDA, TensorRT, MIPI CSI-2, the whole pipeline.

It is now in hand and the perception path is alive. We're running Ultralytics YOLOv8 on a live camera feed, with the model executing on the GPU and tensor cores while the rest of the system (Flask, Socket.IO, control loop) keeps running on the CPU side. Detections come back in real time at high confidence:

YOLOv8 inference on the Jetson Orin Nano Super — a dog identified at 0.91 confidence in an indoor scene, drawn as a magenta bounding box over the live camera feed
Live YOLOv8 inference running on the Orin Nano Super. The model is the stock pretrained YOLOv8 weights; the bounding box and label are drawn by the inference pipeline before the frame is forwarded to the Flask UI. 0.91 confidence on a partially-occluded dog at camera distance — the kind of detection the mower needs to brake for.
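Downstream of the model, the detections gate the brake decision. A hedged sketch of that gate, where the class list and confidence threshold are our placeholders rather than project constants:

```python
# Hypothetical post-inference gate. Each detection is (class_name, confidence),
# e.g. as extracted from a YOLOv8 results object.
BRAKE_CLASSES = {"person", "dog", "cat"}   # things the mower must stop for
BRAKE_CONF = 0.50                          # assumed threshold, tune on real data

def should_brake(detections):
    """True if any brake-class detection clears the confidence bar."""
    return any(name in BRAKE_CLASSES and conf >= BRAKE_CONF
               for name, conf in detections)
```

With this gate, the 0.91-confidence dog from the frame above trips the brake, while a low-confidence or irrelevant-class detection does not.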

Getting JetPack onto the NVMe was harder than it should have been

The official NVIDIA workflow says: flash the preconfigured SD card image, boot it, then clone to NVMe with a helper script. We did the first part — the SD image booted cleanly and we ran through first-boot setup. The clone step is where it fell apart, and the cause was deceptively simple: a size mismatch. Both drives were nominally 256 GB, but the preconfigured SD card was using essentially all of its 256 GB while the NVMe only exposed about 238 GB of usable capacity after manufacturer over-provisioning. A literal block-level clone wouldn't fit. We tried to shrink the SD card's rootfs partition first — resize2fs and sgdisk surgery, all of which technically worked — but the clone-and-boot dance that followed kept turning up edge cases. After a half-day of partition rework, we cut bait on the clone approach entirely.

The path that finally worked: flash JetPack 6.2 directly to the NVMe via NVIDIA SDK Manager from a Windows host, with a known-good USB-C cable — in our case, a GoPro accessory cable, because the random USB-C cables we had on hand couldn't sustain the data rate the flash needed and kept dropping mid-process. Skip the SD card entirely. Skip the clone. Flash to the NVMe, boot from the NVMe, and the "preconfigured SD" path becomes a tire-kicking exercise instead of a deployment one. The other gotchas (an out-of-date Ubuntu base needing hundreds of packages updated, having to compile PyTorch from source for Jetson aarch64) are documented in the Build Log.

05

Picking a LiDAR

We're going off-the-shelf — we just haven't locked in which off-the-shelf yet. Here's the shape of the decision.

What the LiDAR actually has to do

The job isn't mapping. The job is "what's in front of me that wasn't there yesterday." A toy in the yard. A hose. A sleeping cat. We already know the boundary from the teaching pass, and we expect the ground to be roughly planar, so the sensor's job is obstacle detection inside the volume immediately ahead of the mower — not 200 m range, not centimeter-grade accuracy. The bar is "never run over the cat."

The classes we're considering

  • 2D spinning (RPLidar-class): Cheap, well-supported in ROS, easy to integrate. The catch is a single horizontal scan line — a dog at 50 cm looks identical to a fencepost. Probably enough to stop for an obstacle, not enough to characterize it.
  • Multi-line spinning (Slamtec Mapper / RPLidar A3 with tilt, Hokuyo multi-line): A handful of stacked beams gets us closer to a real 3D occupancy grid for not much more money. The bandwidth and integration are still ROS-friendly.
  • Solid-state automotive (Livox Mid-360, Unitree L1): Real 3D point clouds, no moving parts to wear out, the kind of data robotics papers use. More expensive, and the non-repetitive scan patterns take some work to integrate cleanly, but they're firmly in reach for a serious DIY build.
  • Premium spinning 3D (Velodyne / Ouster): Beautiful data. Wrong order of magnitude on price for a residential mower. Off the table.

How we'll decide

The honest answer is "once the cameras and Jetson are integrated." Stereo depth from the cameras might cover more of the obstacle-avoidance load than we initially expected, in which case a 2D spinning unit is plenty as a backup. If stereo struggles — sun glare, dappled shade through trees, low-contrast grass-on-grass targets — we'll move up to a Livox-class solid-state unit. We'd rather decide with real perception data than guess from a spec sheet.

Sensor fusion plan

Whatever LiDAR we pick will feed an occupancy grid at the planner level. Stereo cameras feed the same grid. Disagreements are voted on by the safety supervisor: if either sensor says "stop," we stop. Ugly redundancy and we love it.
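The either-says-stop vote is literally a per-cell OR. As a sketch over two flattened occupancy lists:

```python
def fuse_cells(stereo_occupied, lidar_occupied):
    """Per-cell OR vote: if either sensor marks a cell occupied, it is occupied.

    A false negative (missed obstacle) is far more expensive than a
    false positive (needless stop), so disagreements resolve to 'stop'.
    """
    return [s or l for s, l in zip(stereo_occupied, lidar_occupied)]
```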

06

RTK — the centimeter that changes everything

A standard GPS fix is good to maybe three meters. A mower wandering by three meters eats the flowerbed. RTK gets us to two centimeters — but the question was whether to build our own base station or piggy-back on Missouri's free network.

What RTK actually is

A normal GPS receiver computes its position from the timing of satellite signals, and the atmosphere mangles those timings just enough to put you a few meters off. Real-Time Kinematic (RTK) cancels that error by comparing your rover's measurements against a second receiver — the base — sitting at a precisely surveyed point. The base broadcasts the local error correction over a stream called NTRIP, and your rover applies it in real time. Two receivers, one stream, and you're suddenly accurate to about 1–2 cm.

Option A: build our own base

The DIY route uses a multi-band receiver like the u-blox ZED-F9P with a survey-grade helical antenna mounted on a tripod or rooftop monument. You let it average a position for 24 hours (longer is better), feed the result back as the base's "known" location, and run an NTRIP caster on a Raspberry Pi to broadcast corrections.

  • Hardware: ~$700 for a ZED-F9P board, ~$300 for a decent antenna, ~$80 for a Pi caster, plus mounting and weatherproofing.
  • Pros: Works without cellular. No baseline-distance penalty — the base is right there. Total control.
  • Cons: Up-front cost. Needs a clear sky-view spot with mains power and reliable network. The base location only gets repeatably accurate, not absolutely accurate, unless you tie it to a CORS station — which loops us right back to the public network.

Option B: free MoDOT corrections

Missouri runs MoDOT GNSS, a public Continuously Operating Reference Station network covering the whole state. Anyone can register for a free NTRIP account and pull corrections from the nearest reference station — or, better, from a Virtual Reference Station the network synthesizes near your rover. The state maintains the monuments, surveys the antennas, and runs the casters. It's a public good and it's outstanding.

  • Cost: $0. Free account, free corrections, free forever.
  • Coverage: Reference stations every 50–70 km across Missouri. Most properties are within 20 km of one — well inside the "fast fix, tight accuracy" envelope.
  • Catch: The rover needs internet to pull the stream. No cell, no corrections, no RTK fix.

What we actually do

We use MoDOT as the primary source. A 4G modem on the mower pulls a VRS stream over NTRIP and feeds it into the ZED-F9P's correction port; first fix typically resolves in under 30 seconds at the start of a mow. When we lose cell signal — back of the property, behind the tree line — the receiver falls back to RTK Float and the mower auto-pauses until Fixed is restored. The mission-control UI surfaces this with a yellow GPS pill in the header so you always know which mode you're in.
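The Float-pauses behavior can be sketched as a small gate with a resume debounce. The consecutive-epoch count is our assumption; the description above only specifies pause-on-Float, resume-on-Fixed:

```python
class RtkGate:
    """Pause mowing on anything below RTK Fixed; resume only after a
    run of consecutive Fixed epochs, so the mower doesn't flap at the
    edge of cell coverage."""

    def __init__(self, resume_after=5):
        self.resume_after = resume_after
        self._fixed_streak = 0
        self.mowing = False

    def update(self, fix_mode):
        """Feed one receiver epoch ('FIX', 'FLOAT', ...); returns mow-permitted."""
        if fix_mode == "FIX":
            self._fixed_streak += 1
            if self._fixed_streak >= self.resume_after:
                self.mowing = True
        else:
            self._fixed_streak = 0
            self.mowing = False   # Float or worse: pause immediately
        return self.mowing
```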

And we still want our own base

MoDOT is excellent, but a private base is a great belt-and-suspenders: it works during cell outages, and during the brief windows when the public caster goes down for maintenance. We're prototyping a Pi-based caster on the workshop roof now. If/when it's reliable, the rover will prefer the local base and silently fall back to MoDOT.

07

The software stack

A phone-first mission-control web app on top, ROS 2 in the middle, a safety MCU at the bottom. Each layer has exactly one job.

UI

Mission control PWA

A single-page web app you install to your phone like a native app. Boundary teaching ("walk the perimeter, I'll record"), keep-out drawing, mow start/pause/stop, live telemetry, session history, return-home. Works offline once map tiles are cached. Built so the mower itself is the server — you connect to its WiFi and you're in.

Brain

ROS 2 perception & planning

On the Jetson: stereo + LiDAR fusion into an occupancy grid, RTK-GPS into the EKF, nav2 for local planning, Fields2Cover for coverage path generation across the boundary polygon. State-machine over the top so "mowing," "transit," "RTK degraded — pause," and "return home" are explicit, observable modes — not flags.

Reflex

Safety MCU firmware

A Teensy 4.1 sits between the brain and the actuators. It does three things and nothing else: closes the actuator position loop at 1 kHz, runs a 500 ms watchdog (no command => lap bars to neutral, blades off), and listens for the physical e-stop. If the Jetson kernel panics, the MCU doesn't care — it just keeps running the watchdog.
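The watchdog rule itself is tiny. Sketched in Python for readability, though the real implementation is Teensy firmware:

```python
class CommandWatchdog:
    """500 ms command watchdog. No fresh command within the timeout
    means lap bars to neutral and blades off, regardless of what the
    brain is doing."""

    TIMEOUT_S = 0.5

    def __init__(self):
        self._last_cmd_time = None   # no command seen yet

    def feed(self, now):
        """Call on every valid command from the Jetson."""
        self._last_cmd_time = now

    def expired(self, now):
        """Checked every control-loop tick; True triggers the failsafe."""
        return (self._last_cmd_time is None
                or now - self._last_cmd_time > self.TIMEOUT_S)
```

On every loop tick the firmware asks `expired(now)`; a True answer drives the actuators to neutral and drops the PTO, no matter why the commands stopped.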

What it does, day to day

01

Teach a property once

Walk the boundary with the phone. The mower records every step as RTK fixes and simplifies the trace into a polygon. Add keep-outs the same way.
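"Simplifies the trace into a polygon" is the classic Ramer-Douglas-Peucker step. A self-contained sketch, assuming coordinates already projected to local meters (the production path may well use a library instead):

```python
import math

def simplify(points, tol):
    """Ramer-Douglas-Peucker simplification of a recorded trace.

    points: [(x, y), ...] in meters (e.g. local ENU from RTK fixes);
    tol: maximum allowed deviation in meters.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    seg_len = math.hypot(dx, dy) or 1e-12
    # Perpendicular distance of each interior point from the chord.
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / seg_len
        if d > dmax:
            dmax, imax = d, i
    if dmax <= tol:
        return [points[0], points[-1]]       # chord is close enough
    left = simplify(points[:imax + 1], tol)  # keep the farthest point,
    right = simplify(points[imax:], tol)     # recurse on both halves
    return left[:-1] + right
```

GPS jitter along a straight fence line collapses to two vertices; real corners survive because they sit farther than `tol` from the chord.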

02

Plan a coverage pattern

Fields2Cover generates an efficient mow with proper headland turns. The user picks stripe direction; the planner does the rest.

03

Mow with live awareness

Perception runs on every camera frame. New obstacle? Stop, classify, route around if possible, alert the user if not.

04

Degrade gracefully

RTK drops to Float? Pause and wait. Camera goes blind? Reduce speed and lean on LiDAR. Connection drops? Lap bars to neutral, immediately.

05

Come home

Explicit "return home" command from the UI, or automatic at end-of-mow. Follows a recorded home path so it never improvises a route through the gate.

06

Log everything

Every session writes a JSON log: path, fuel burn, RTK quality history, obstacle events. We mine these to figure out what to fix next.
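A session record in that shape might look like the following sketch; the field names are illustrative, not the project's actual schema:

```python
import json

def session_log_json(session):
    """Serialize one mow session to the JSON log written at end-of-mow."""
    return json.dumps(session, indent=2, sort_keys=True)

example_session = {
    "path": [[0.0, 0.0], [1.5, 0.0], [1.5, 8.0]],   # simplified mow path (m)
    "fuel_burn_gal": 0.8,
    "rtk_history": ["FIX", "FIX", "FLOAT", "FIX"],  # per-epoch fix quality
    "obstacle_events": [{"t": 123.4, "class": "dog", "conf": 0.91}],
}
```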

Phone screenshot of the live mow screen
08

What's next

  1. Order the linear actuators. Two PA-01-4-56-POT-12VDC units from Progressive Automations, plus the matching 6-pin extension cables.
  2. Bench test the actuators. Drive them with a 12 V bench supply and the Teensy. Sweep the full stroke, verify the pot feedback is clean, confirm the built-in limit switches work, and measure real-world speed and stall current.
  3. Mount to the mower. Design and fabricate frame-rail brackets that tie the actuators directly into the hydraulic lever pushrods, with the lap bar mechanism bypassed entirely. Mock up the geometry on the bench before drilling anything on the actual machine.
  4. Build the control box. An enclosure to house the Teensy, the signal-switching ICs for the OEM safety chain, the isolated 12 V → 5 V DC-DC, the throttle DAC, the start/ignition relays, and the Jetson itself — with weather-sealed cable entries for actuator harnesses, camera PoE, GPS, and Xbox controller.
  5. First drive. Mower running, Jetson + Xbox controller + actuators all wired in. Slow crawl first, then full throttle range, full reverse, full forward. Verify the kill chain drops the engine the instant the controller goes idle. This is the moment drive-by-wire stops being a plan and starts being a mower.
  6. Pick the LiDAR and cameras. Lock in final sensor selection, order, and bench-verify the drivers under the Jetson before they ever touch the mower.
  7. Mount, integrate, and test. Sensors onto the mower, perception graph on the Jetson, planner connected, Flask UI talking to ROS 2 over WebSocket. Then test, test, test — in increasingly difficult yard, weather, and obstacle conditions until we trust the thing.
09

Build log — what changed and why

We're keeping every architectural pivot honest. New entries on top, oldest at the bottom. If a section above contradicts something here, the dated entry wins.

Lessons learned

Flashing the Jetson: SD → NVMe was a saga

The official NVIDIA path says: flash the preconfigured SD card image, boot it, then clone to NVMe with a helper script. We did the first part — the SD image booted cleanly and we ran through Ubuntu's first-boot wizard. Cloning to NVMe is where it fell apart, and the cause turned out to be simpler than the fixes we initially chased.

Both drives were marketed as 256 GB, but the preconfigured SD card was using essentially all of its 256 GB capacity while the NVMe only exposed about 238 GB of usable space after manufacturer over-provisioning. A block-level clone simply wouldn't fit — the source was bigger than the target. The clone scripts didn't have a clean way to shrink-on-the-fly, so the copy aborted partway through. We spent hours on partition rework on the SD card side (resize2fs to shrink the ext4 filesystem, sgdisk to shrink the partition to match) trying to massage the source into something small enough to clone. The shrink itself worked, but every retry of the clone-then-boot sequence turned up new edge cases. The NVMe wasn't damaged at any point in this — we just couldn't get a working OS onto it through the clone path.

The path that finally worked: flash JetPack 6.2 directly to the NVMe via NVIDIA SDK Manager from a Windows host, using a known-good USB-C cable. We tried SDK Manager from a Linux VM on the Mac first (UTM with USB-C passthrough) and it failed because the Tegra device disappears and reappears on the USB bus multiple times during a flash, and the VM USB driver couldn't keep up. Switching to Windows on bare metal fixed the USB stability problem — but only after we swapped to a real data-grade USB-C cable. The cheap charging cables we'd been using couldn't sustain the data rate; the GoPro accessory cable in our drawer could. First clean flash on that combination.

One more piece of setup we didn't see coming: the Orin Nano Super dev kit doesn't have a dedicated recovery-mode button. To put the Jetson into the bootloader state where SDK Manager can actually talk to it, you have to short two pins on the FRC header with a jumper (or a paperclip in a pinch) while powering on. We spent real time wondering why SDK Manager couldn't see the device on USB before we found the right pins to short. It's documented in the carrier board user guide, but it's easy to miss if you're following SDK Manager's own walkthrough — that walkthrough assumes a button, and the dev kit doesn't have one.

Other gotchas that ate hours after the flash succeeded:

  • The Ubuntu install in JetPack 6.2 ships very stale — hundreds of packages had updates available on first boot. sudo apt full-upgrade took the better part of an hour.
  • PyTorch from PyPI doesn't have CUDA on aarch64 Jetson, and the NVIDIA-provided wheel for our exact JetPack version had Python ABI mismatches. We ended up compiling PyTorch from source against the Jetson's CUDA 12.x — a ~3-hour build, but produced a clean, working install.
  • Cheap USB-C cables will lie to you. The cable that came with the laptop charger didn't carry data fast enough to flash. The GoPro cable did. If you're flashing a Jetson, use a known-good data cable — ideally one rated for USB 3.x or USB4.

Once the install was clean and PyTorch was working, moving the existing mower stack onto the Jetson was uneventful: same Python, same Flask, same Socket.IO, just running on much more capable hardware. We added an Ultralytics YOLOv8 inference pipeline to the live camera feed and it's catching real objects at high confidence (see the dog detection in the Raspberry Pi vs. Jetson section above — 0.91 on a partially-occluded subject is a great start).

What we'd tell next-time-us: skip the SD-to-NVMe clone path entirely. Use a Linux box or Windows on bare metal, with a known-good USB-C data cable, to flash JetPack directly to the NVMe via SDK Manager. The preconfigured SD card path is convenient for kicking the tires, but it's not for production deployment. And don't trust your USB-C cable until you've watched it complete a flash.

Documented

Actuator: head-to-head against LACT2P-12V-10 and Ultra Motion A2

Re-verified the PA-01 pick against the two natural alternatives by pulling current specs straight from the manufacturers' product pages (lesson from the PA-04-HS misread earlier this week: never trust spec memory, always re-fetch).

LACT2P-12V-10 (Concentric / Glideforce, $193 each): same IP65, same 10 kΩ pot feedback, wider operating temperature range than the PA-01 (−25°C to 65°C). But its 1.97″ usable stroke comes in just under our measured 2″ pushrod travel — zero margin for fit-up tolerance — and it's ~20% more expensive than the PA-01. The 4″ LACT4P-12V-10 would solve the stroke problem, but at that point the price advantage over the PA-01 closes to nothing.

Ultra Motion A2PZ8B-B0M0E0: sweet specs — 100% duty, contactless absolute position feedback, optional back-drivable ball screw, CAN 2.0B / RS-422 / analog control inputs. But $1,500–$2,500 per unit puts a pair at $3,000–$5,000, which is more than the rest of the perception stack combined. Ruled out on price alone: the PA-01 covers the same control role at roughly one-tenth the cost, and the engine-kill chain (not actuator back-drive) is our primary safety-critical stop path.

Decision unchanged: PA-01-4-56-POT-12VDC, 2× stroke margin, 2.24× force margin, IP65, ~$320 for the pair. Alternatives written up in the Actuators section above so future-us doesn't re-litigate.

Locked in

Actuator: PA-01-4-56-POT-12VDC

Final pick after three iterations. Earlier in the day this entry pointed at the PA-04-HS-4 based on specs recalled from memory that turned out to be wrong — the real PA-04-HS is 400 lb at 0.35″/sec, not 50 lb at 2″/sec. Live web verification against Progressive Automations' actual product pages exposed the gap.

The PA-01 family at 4″ stroke / 56 lb / pot feedback (progressiveautomations.com/products/pa-01-pot) is the genuine sweet spot:

  • 56 lb rated vs 25 lb measured pushrod force = 2.24× force margin
  • 4″ stroke vs 2″ needed = 2× stroke margin
  • 1.02″/sec at load → ~6 ft stop distance at 5 mph (with spring assist on retract)
  • 10 kΩ pot feedback included — absolute, no homing routine
  • IP65 native — no bellows boot
  • ~$160/unit, ~$320 for two — cheapest path that includes feedback at this force tier
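The margins in that list are simple arithmetic, and the stop-distance figure is worth showing: at 5 mph the actuator-only return to neutral gives ~7.2 ft, and the spring assist on retract is what pulls it down toward the ~6 ft quoted above. A quick sketch using only the numbers from this entry:

```python
# Sanity-check the PA-01 margins and stop distance (numbers from this entry).
FORCE_RATED_LB = 56.0        # PA-01-4-56 rated force
FORCE_NEEDED_LB = 25.0       # measured peak pushrod force
STROKE_IN = 4.0              # actuator stroke
TRAVEL_NEEDED_IN = 2.0       # measured pushrod travel (1" each direction)

force_margin = FORCE_RATED_LB / FORCE_NEEDED_LB    # 2.24x
stroke_margin = STROKE_IN / TRAVEL_NEEDED_IN       # 2.0x

# Worst case: the actuator alone returns 1" to neutral at 1.02 in/s
# while the mower coasts at 5 mph. Spring assist on retract shortens this.
SPEED_FTS = 5 * 5280 / 3600                    # 5 mph ~= 7.33 ft/s
return_time_s = 1.0 / 1.02                     # ~0.98 s
stop_distance_ft = SPEED_FTS * return_time_s   # ~7.2 ft, actuator-only

print(f"force margin  {force_margin:.2f}x")
print(f"stroke margin {stroke_margin:.1f}x")
print(f"stop distance {stop_distance_ft:.1f} ft (no spring assist)")
```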

Changed

Pivot: drive the pushrod, not the lap bars

Original plan was to mount actuators against the lap bar handles. Then we realized the lap bar is a lever — the 25 lb we measured at the handle wasn't actually the force the actuator would see. Worse, leaving the lap bar mechanism in place means the actuator has to fight slop in pivots, rod ends, and joints, all of which add error to closed-loop position control.

Pivoted to tying actuators directly into the hydraulic lever pushrods, disconnecting the lap bar mechanism entirely. Cleaner mechanically, fewer moving parts, and the operator's manual fallback path goes away — which is what we wanted for a full drive-by-wire build. Re-measured at the pushrod (with the lap bar mechanism unbolted): 1″ travel each direction (2″ total) and 25 lb peak force.
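The lever point is easy to make concrete. A minimal static-torque sketch — the arm lengths here are hypothetical, since we never measured the bar geometry (which is exactly why re-measuring at the pushrod was the right move):

```python
# Why handle force != pushrod force: the lap bar is a lever.
# Arm lengths below are HYPOTHETICAL illustration values, not measurements.
def pushrod_force(handle_force_lb: float, handle_arm_in: float,
                  pushrod_arm_in: float) -> float:
    """Static lever balance: torque at the handle = torque at the pushrod."""
    return handle_force_lb * handle_arm_in / pushrod_arm_in

# e.g. 25 lb at a 14" handle arm acting through a 3.5" pushrod arm:
print(pushrod_force(25.0, 14.0, 3.5))  # -> 100.0 lb at the pushrod
```

Mounting at the handle could have quadrupled the force the actuator sees; mounting at the pushrod makes the 25 lb measurement the real requirement.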

Changed

Reframe: 25% duty cycle is fine for self-locking actuators

We were going to spec a 100%-duty-cycle industrial actuator (Linak / Thomson Electrak class, $700+ each). Reasoning was that the operator holds the controls forward for minutes at a time, so the actuator is "working" the whole time and a 25%-rated unit would overheat.

That reasoning was wrong. ACME-screw linear actuators self-lock under load with zero motor current — the screw thread holds whatever position the motor reached, indefinitely. So duty cycle only accrues during movement, not during the long minutes the control is being held. In real mowing, the actuator is moving 5-15% of the time. A 25%-rated consumer-grade actuator with the right force rating is plenty.

Collapsed the actuator budget by 4-5× and let us spend attention on the speed axis instead — which mattered, because a slower actuator means a longer stop distance when the operator releases the controls.
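The duty-cycle arithmetic deserves a sketch: duty accrues only while the motor moves, because the self-locking screw holds position at zero current. The move/hold times below are illustrative, chosen to bracket the 5-15% motion estimate above:

```python
# Effective duty cycle for a self-locking ACME-screw actuator:
# only motor-on time counts; holding position draws zero current.
def effective_duty(move_s: float, hold_s: float) -> float:
    return move_s / (move_s + hold_s)

# Illustrative one-minute windows of mowing (times are assumptions):
print(f"{effective_duty(move_s=9.0, hold_s=51.0):.0%}")  # 15% — busy turning
print(f"{effective_duty(move_s=3.0, hold_s=57.0):.0%}")  # 5%  — long straights
```

Both land comfortably under the 25% rating, which is the whole argument in two lines.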

Changed

Compute: Raspberry Pi 5 → Jetson Orin Nano 8 GB

Original plan was Pi 5 4 GB with the official AI Kit (Hailo-8L NPU, 13 TOPS). Real-world considerations pushed us up to Jetson Orin Nano 8 GB Dev Kit: 40 TOPS of AI compute, 8 GB LPDDR5, hardware H.264/H.265 decoders for the multi-camera RTSP streams, M.2 NVMe slot, and 12 V DC barrel jack input that takes the mower battery directly (no buck converter).

Tradeoffs: ~$170 more, smaller hobbyist community than Pi, more finicky JetPack setup. We accept those for the autonomy ceiling — the trajectory is toward object detection and semantic segmentation, both of which need real GPU compute the Pi doesn't have.

Locked in

Camera architecture: 4× PoE Reolink + 1× CSI-direct ML camera

After the camera-research rabbit hole (single-sensor 360°, multi-sensor housings, consumer 360° cameras, etc.), the cleanest split:

  • 4× Reolink RLC-510A PoE bullets for 360° operator awareness — one each at front-left, front-right, rear-left, rear-right. ~$240. Live on the PoE switch network with the LIDAR.
  • 1× Arducam IMX415 onboard-ISP MIPI module for front-facing ML inference — ~$179, IP67 housing fabricated DIY. CSI direct to Jetson for 30 ms latency and clean (uncompressed) pixels into the classifier.

Why two paths instead of one: the 360° cameras are for operator viewing (humans tolerate H.264 compression artifacts and 200 ms latency), while the ML camera is for safety classification (kid/dog/sprinkler-head detection) where compression artifacts and latency hurt model accuracy. Different roles, different optimizations.
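Concretely, the two roles end up as two very different capture pipelines on the Jetson. A sketch — the RTSP URL, credentials, resolution, and sensor-id are placeholders, and the element names are the standard JetPack GStreamer plugins:

```python
# Two capture paths, two optimizations (placeholder URL and sensor-id).

# Operator view: H.264 over the PoE network; compression and ~200 ms
# latency are fine for a human, and nvv4l2decoder does hardware decode.
RTSP_VIEW = (
    "rtspsrc location=rtsp://user:pass@10.0.0.11:554/h264Preview_01_main "
    "latency=200 ! rtph264depay ! h264parse ! nvv4l2decoder "
    "! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink"
)

# ML path: CSI direct, uncompressed pixels, minimal latency into the model.
CSI_ML = (
    "nvarguscamerasrc sensor-id=0 "
    "! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 "
    "! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink"
)
```

Either string can feed cv2.VideoCapture with the cv2.CAP_GSTREAMER backend; only the CSI one goes anywhere near the classifier.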

Locked in

Safety: engine-kill is the failsafe, not the lap-bar return spring

Caught a flaw in the previous safety reasoning: with self-locking ACME-screw actuators, the lap-bar return spring cannot back-drive the actuator on power loss (screw static hold force ~500 lb ≫ spring's 25 lb). The spring isn't a power-fail failsafe.

The real failsafe is engine-kill. The mower's existing seat-kill switch + PTO interlock chain stays intact, in series with a Teensy-controlled ignition relay. On any fault — operator off seat, controller signal loss, software watchdog — ignition cuts, engine dies, hydrostatic pump stops, wheels lock from hydrostatic braking. Mower stops in <1 second regardless of where the actuator left the pushrods.
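The software link in that chain can be sketched in a few lines. The Watchdog class and 0.5 s timeout are ours for illustration, not the real stack; the point is that the Teensy holds the ignition relay closed only while fresh heartbeats keep arriving, so any software stall fails toward engine-kill:

```python
import time

# Illustrative software-watchdog half of the kill chain. The control loop
# calls feed() every cycle; the Teensy side cuts ignition when heartbeats
# stop, so a hung process fails safe (engine dies, hydrostatics brake).
class Watchdog:
    def __init__(self, timeout_s: float = 0.5):
        self.timeout_s = timeout_s
        self.last_feed = time.monotonic()

    def feed(self) -> None:
        """Heartbeat: called while the controller is provably alive."""
        self.last_feed = time.monotonic()

    def expired(self) -> bool:
        return (time.monotonic() - self.last_feed) > self.timeout_s

wd = Watchdog(timeout_s=0.5)
wd.feed()
assert not wd.expired()   # fresh heartbeat: relay stays closed
time.sleep(0.6)
assert wd.expired()       # missed heartbeats: ignition cuts
```

Note the asymmetry: the heartbeat has to be actively sent to keep the engine running, so every failure mode — crash, hang, cable fault, power loss — lands on the kill side.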

Locked in

Perception stack: Livox Mid-360 + 4× Reolink + ZED-F9P RTK

After researching the LIDAR options (RPLIDAR A2M12 fails outdoors due to sunlight interference; the S-series and Livox are dToF and sun-immune): the Livox Mid-360 at ~$700. 3D, 360° × 59° FOV, IP67, 100k-lux sun immunity, 40 m range, built-in IMU, mature ROS2 driver. Same sensor that's shipping in the Dreame A1 commercial robotic mower.

Cameras and RTK GPS settled in the same week (see the Cameras and RTK sections). Cat5e/PoE for everything that runs on the mower exterior; USB GPS lives inside the compute enclosure. One cable type for serviceability.