
Author Archive

MythTV on a Raspberry Pi 4 (and 5)

by on Mar.27, 2025, under Computing, Linux

So I finally bit the bullet and stood up a MythTV front end on this Raspberry Pi 4 that had been hanging around the office for years.  I had back-ordered one during the post-COVID supply chain crunch and it turned up many months (possibly more than a year) later, but I hadn't had the motivation to tackle the project.  The Raspberry Pi 5 announcement reminded me, so I ordered one and decided to try to get Myth working on the RP4 to start.  There are some issues with the decoder and graphics software stacks on the RP5 at the moment, but I knew the RP4 would work.

I started with a degree of ignorance and built it in the same way that my other front ends are set up: using Christian Marillat's "dmo" packages from deb-multimedia.org and running the front end solo in an auto-login X session.  This worked fine, but I was not getting good performance and could not seem to get the playback profiles set up the way I needed them.  This led me down a rabbit hole; it turns out that a lot of the information in the MythTV wikis is obsolete and does not apply to modern releases.  For example, the recommendation is to use the OpenMAX decoder, but no such decoder exists in MythTV 33+.  The V4L2 decoder is there, but it is buggy and tends to lock up sometimes.  The standard decoder drops frames and playback is jerky no matter how I set the read-ahead or fiddle with CPUs.  I even tried overclocking it.

While searching for details, I kept seeing references to "mythtv-light".  There is not a lot of explanation as to what it really is or why it is needed.  What is so light about it?  The wiki talks about the back end, but very little about the front end.  What build flags are used?  Why is it distributed via a random Google Drive?  It all seemed very sketchy, so I wanted to avoid it.  Eventually I found a separate git repository that contains packaging scripts for MythTV for a variety of platforms.  In there were the scripts for mythtv-light, which seems to focus mainly on making installation as simple as possible from a single package without too many outside dependencies.  It's not really there to support the RPi specifically.

That said, in terms of the front end (which is all I care about), the RPi MythTV Light packages seem to enable three main things:

  1. QPA EGLFS rendering support.
  2. Specific decoder and OpenGL support for the Raspberry Pi.
  3. Disabled support for all unnecessary libraries and features.

In terms of overall speed and efficiency, item 1 is really important on the Raspberry Pi.  By directly supporting Qt's EGL platform abstraction layer, there is no need to run X at all.  Christian's dmo packages do not seem to support this, as they are built against Qt for X.  Item 2 used to be very important, but that no longer seems to be the case.  I believe this is because the version of ffmpeg that ships with Raspbian is already set up to use whatever hardware decoders are currently supported on the RP4.  The "Standard" decoder in Myth uses ffmpeg, so it "just works" without any special support for RPi-specific OpenMAX libraries.  Item 3 is a nice-to-have, but with 8GB of RAM and X not running, I don't know that it matters so much for the front end on the newer RPis.  I am getting very good performance on the RP4 and it can *just about* manage to decode 1080p HEVC with only an occasional skip when the bit rate gets too high.  The CPUs are pretty busy, though.

Proper cooling is a must or the SoC will hit thermal throttling.  The above RP4 that I used as a test bed just has a base plate and a small heat sink on the SoC.  It is not enough: the package temperature quickly rises to 60C under heavy load.  There are a few “brands” that list a passively-cooled case on Amazon like the one pictured on the right (there are versions available for RP4 and RP5).  It comes with thermal pads for all of the hot packages as well as the bottom of the PCB.  The RP4 still gets toasty in mine, but it works well enough if left in the open air (i.e. strapped to the back of the TV).  An RP5 is perfectly content in one of these.

I can’t get access to edit the MythTV wikis, so I’ll document my findings here:

  • The advice on how to auto-start the front end using cron is…not ideal.  You can absolutely start it from the mythtv user account's .profile.  You just need to launch it only on a tty session and not for an ssh session (a rough sketch of the idea follows this list).  I have mythtv set up to auto-login to tty1, so that is easy.  I have a few other configurable options in there, so I'll share those files here (you can rip out what you don't need).

    mythtv-autologin.tar.gz

  • For the issue of the keyboard not working (important if you use a remote that masquerades as a keyboard like I do), it is just a permissions issue.  Add the mythtv user to the "input" group in /etc/group (see the one-liner after this list) and reboot.  Do not start MythTV from /etc/rc.local as the Wiki suggests; that will run the front end as root, which is a bad idea.
  • The table showing how to set up the custom playback profile is out of date.  There is no MMAL decoder.  Use the "Standard" decoder instead.  The V4L2 decoder is still listed, but it is not very stable in my experience.  The other fields seem to work OK (4 CPUs, etc.).  Use the "Medium quality" deinterlacer, not the low quality one as I have read in a few places.  I got horrible flicker with that one.
  • Increase the read-ahead buffer in the advanced settings to quell some of the random stuttering you might see, depending on your network’s performance.  I set mine to 400ms, but you can go higher.
  • There is no need to set the gpu_mem value in the firmware config.  The defaults for RP4 and RP5 are fine.  I am overclocking my RP4 as follows:

    arm_freq=2147
    over_voltage=6
    gpu_freq=750
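
To illustrate the first two bullets above: the real files are in the archive linked earlier, but the tty-only launch from the mythtv user's .profile boils down to something like this (a minimal sketch; the tty1 path and the bare mythfrontend invocation are assumptions to adapt to your setup):

    # Hypothetical excerpt from ~mythtv/.profile: only launch the front end
    # for a local login on tty1, never for an ssh session.
    if [ "$(tty)" = "/dev/tty1" ] && [ -z "$SSH_CONNECTION" ]; then
        exec mythfrontend
    fi

And adding the mythtv user to the input group is a one-liner:

    $ sudo usermod -aG input mythtv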

With that feather in my cap, I decided to try a Raspberry Pi 5.  I've read a lot of conflicting opinions about its viability as a media player, but on paper it looks like a slam-dunk.  As with everything on Linux, the issues seem to stem more from the software support and not so much from the hardware.  The hardware is very capable.  All of the above applies to the RP5 just fine; it just needs a few workarounds at the moment.  These will probably not be needed in the future as support for the RP5 matures.

  • The front end will fail to start EGLFS.  This is because the RP5 has two devices for the two GPUs and EGL needs to use the second one for 2D OpenGL.  This can be addressed by passing the QPA layer a configuration file that specifies what device and display port to use.  Mine is included in the above archive.
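
The file in the archive is what I actually use; roughly speaking, the Qt eglfs KMS config is just a small JSON file pointed at by the QT_QPA_EGLFS_KMS_CONFIG environment variable, something along these lines (the device node and output name here are only an illustration and will vary by board):

    {
      "device": "/dev/dri/card1",
      "outputs": [
        { "name": "HDMI1", "mode": "1920x1080" }
      ]
    }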

Once it’s up and running, the RP5 works great as a front end.  Watching the same 1080p HEVC video that made the RP4 get out of breath caused the RP5 to hardly break a sweat.  I have not tried 4k yet, but it looks promising.  Things should only get better as support for hardware acceleration improves.


MythTV Frontend Saga

by on Mar.26, 2025, under Computing, Linux

I’ve been using MythTV for more than two decades now.  I have separate front and back ends, as I’ve always had some sort of server running in the basement that is on all the time.  Our first front end was an original Xbox with a Cromwell BIOS running Xebian.  It was just powerful enough to do the job at standard definition and we already had the DVD remote accessory, so it was the perfect choice.  The Xebian project was eventually abandoned and despite my efforts to keep Debian on the Xbox going, software bloat made the experience rather sluggish.  A VIA EPIA M10000 Mini-ITX system took its place in our living room, while the Xbox moved into our bedroom.  We used the EPIA for many years until the capacitors started to fail.

At this point, we had our first HDTV in our basement: a Sony Wega KD-34XBR960 (what an epic boat-anchor of a CRT that was).  I was using an ASUS A8N-VM based PC for the front end so that we could watch HD videos and I wanted something to replace the EPIA system that could at least decode HD on our living room SD TV without issue.  nVidia, with their ION 2 chipset, was the only show in town with efficient, hardware-accelerated H.264 decoding on Linux that MythTV also natively supported.  I picked up an ASUS AT3IONT-I Deluxe and built a new front end around it.  The CPU is quite modest (Intel Atom 330), but this board is all about the integrated GPU and nvdec support.  It worked fantastically and as a bonus the “Deluxe” version came with a remote that sort-of worked (remote controls and MythTV are a whole other thing).

Eventually it came time to retire the old Xbox and turn it back into a game console: it was getting unusably sluggish and there was no hope of watching any HD programming on it (even though the TV was SD).  I wanted to get another identical ION based board, so I picked up an ASUS AT5IONT-I Deluxe.  It had a noticeably faster CPU (Intel Atom D550), but was otherwise pretty similar.  We used these as our main front ends for many years.

Then came the troubles: nVidia started obsoleting their older drivers and the GPUs on these ION boards were not supported by any of the newer drivers.  I managed to limp things along for a few more Debian OS upgrades until it was no longer possible to shoe-horn the required, ancient binary drivers into modern X servers.  The open-source nouveau drivers were and still are pretty terrible and did not seem to support the decoder blocks at all.  The only way to keep using these old systems was to freeze the OS versions.  This only worked to a point, as it became difficult to support newer versions of MythTV on older versions of Debian without doing custom builds…which I grew tired of.  I ran into the same problem on some of my older laptops and other machines with nVidia GPUs as well.  I will not be buying any new nVidia-based hardware for the foreseeable future, as I like to reuse old hardware for other purposes.  nVidia has decided to make that impossible by keeping even their most obsolete hardware closed.

Then there was the issue of HEVC.  The hardware decoder blocks on these boards were several generations too old to have any support for H.265 decoding and the CPUs were far too modest to handle software decode.  My only choice is to avoid HEVC files entirely, but that is getting more and more difficult.  H.265 is a far superior codec for dealing with HD and especially 4k, so it is kind of silly to try to keep dancing around the problem.

Finally there is the issue of power.  Running all these machines is neither cheap nor wise.  I should be using something more efficient that preferably shuts down when the TV is off.

The obvious choice is the Raspberry Pi.  I have been exploring this on and off for at least a decade.  I bought a first-generation one when it first came out in 2012.  It is a fun little toy that I used on a few little projects, but it didn't occur to me to try to use it as a MythTV front end.  It seemed far too modest.  I felt the same about the second generation as well, though it turned out that with the right build options it could be made to work.  While the Broadcom SoC is very capable, the issue has always been software support of the hardware acceleration blocks in Linux.  Various licensing issues get in the way of a clean implementation.  I did give it a shot with a Raspberry Pi 3, but I was not terribly impressed with its performance even after jumping through the necessary hoops to get everything to work.  My ION-based machines seemed to work better overall.  I was never able to get enough performance to support H.265 decoding on the CPU, so I abandoned it.

Then came the Raspberry Pi 4.  It seemed like it was going to be the hot ticket, but I was too late to the party.  The post-COVID supply chain crunch made buying any Raspberry Pi a total nightmare.  I tried for a long time to get one through legitimate channels, but eventually had to give up.  I wasn’t going to spend flipper dollars on eBay for one.  Eventually, one of my back-orders got fulfilled and an RP4 turned up in the mail.  I got busy with other things and forgot about it, so it sat on a shelf for a long time until….


Cinnamon Workspace Switcher Labels v6

by on Feb.07, 2025, under Linux

I recently switched back to Cinnamon from MATE on one of my machines and found I needed to reapply this patch.  I've updated it a bit to support all three modes of operation (workspace desktop preview, workspace labels, and the original useless numbers), selectable from the configuration dialog.  You can also specify the button width in label mode, which can be nice if you want control over the aesthetics.

This patch works for version 6.x of Cinnamon:

Download it and apply it thusly on a Debian-based system:

$ cd /usr/share/cinnamon/applets/workspace-switcher@cinnamon.org
$ sudo patch -p1 < ~/Downloads/workspace-switcher@cinnamon.org.v6.patch
patching file applet.js
patching file settings-schema.json
$

Then restart Cinnamon with Alt+F2, then "r" and Enter.


Flying an Eaglercraft Server

by on Jan.23, 2025, under Linux

This is a brain-dump of what I ended up doing to set up an "Eaglercraft Server".  I've been running small Minecraft servers since around 2011.  It was just for myself and a few of my work colleagues.  I enjoyed experiencing MC coming out of beta and eventually growing up.  We ended up with two servers: the original, started back in Beta 1.3, became a creative server, while a "newer" 1.4 server runs in survival mode.  Over time, I eventually lost interest in the game but left the servers running.  Like so many kids over the past decade or so, mine inevitably got into MC in a big way and still spend time on the old servers with their friends from time to time.  I also stood up a third server for them to have a fresh start, since the old worlds are massive and had been generated using such old versions (there are some pretty weird chunk transitions in places).  One day, my youngest asked me if I could help him set up an "Eaglercraft Server".  Boy, what a rabbit hole that turned out to be…

I am by no means an expert here, but I wanted to document what I've learned.  At its core, Eaglercraft is a Javascript port of the Minecraft client that can run in a web browser (I'll leave the reason why it exists and the origin of the name as an exercise for the reader).  Since it was an unofficial port, the project came under fire from Mojang's current owners and had to go underground to some extent.  The client is currently only available for specific versions: 1.5.2 and 1.8.8.  There is no such thing as an "Eaglercraft server", per se.  What is actually needed is a Java Edition server configured such that:

  1. It’s preferably running in offline mode so that unauthorized users can connect…ah-hem.
  2. It’s reachable by Eaglercraft clients from a web browser, preferably via port 80 if possible.
  3. It supports version 1.8 clients.

All of these present their own sets of problems, but all are solvable thanks to the amazingly active development community around Minecraft servers.  I am assuming that the reader knows how to administer a Linux server and has background knowledge of operating a Minecraft server.  If not, there are lots of resources out there.  My goal here is to fill in the missing big picture around what is required.

Authentication

One can, technically, login to an “online” (officially-authenticating) server from an Eaglercraft client by going through a number of convoluted steps to acquire the necessary credentials.  Each player would need to do this on their own, which can be a bit of a pain for the less-technically-inclined.

All of that can be avoided by running in offline mode and setting up an authentication server that users can bounce through when they first connect.  There are plugins available for forked servers such as Forge, Spigot, and Paper that implement this: nLogin can provide authentication, which then uses BungeeCord to tie the servers together.  The authentication server needs to support the same client version(s) as the main server, and don't forget to turn on the whitelist for both if they are going to run offline.  The first time a player connects, they will be asked to set a password, which they then have to remember.  nLogin offers tools for administrators to reset passwords, etc.

One annoying problem is that the UUIDs nLogin generates will conflict with those of any online Java clients coming in, which then causes conflicts in the whitelist.  nLogin has some settings for this, but each has caveats.  One workaround: after adding an online player to the whitelist, look for the actual UUID they are using in the logs and manually edit the whitelist.json file to override the generated one for that player (see the example below).  You'll have to issue a "whitelist reload" command afterwards, but they should be granted access then.  The same process needs to be repeated on the main server as well, if it is running a whitelist.
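
For reference, whitelist.json is just an array of name/UUID pairs; the entry below is entirely made up, but it shows where the overriding UUID goes:

[
  {
    "uuid": "12345678-1234-1234-1234-123456789abc",
    "name": "SomePlayer"
  }
]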

Proxy

To avoid exposing old server versions to the Internet, it is wise to put everything behind a proxy.  The PaperMC project has a lightweight proxy server called Velocity that is a good candidate for this, as it supports BungeeCord and many of the required plugins, including nLogin and Eaglercraft.  The main and authentication servers can then be set up to bind only to localhost, preventing direct external connections.  BungeeGuard can also be used as a further protective step.
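
As a sketch of the localhost-only binding, the main and authentication servers each get something like the following in server.properties (the port is arbitrary), while the proxy is the only thing listening on a public address:

# Bind to localhost only so the proxy is the sole way in.
server-ip=127.0.0.1
server-port=25566
online-mode=false
white-list=true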

Note that when using nLogin, the plugin is actually resident on the proxy and not the authentication server.  This can be a little bit confusing when trying to manage the plugin from the local console.  One has to connect to the proxy console, not the authentication server console.

Server

The primary server that players will play on needs to support 1.8 clients.  There are two ways to approach this:

  1. Run an actual 1.8 server.
  2. Run a more modern server with ViaVersion plugins.

The first approach is the simplest, but it doesn’t scale well thanks to the many bugs present in Minecraft 1.8.  It’s fine if playing among friends that behave themselves or that all agree to use the same exploits.  It can also be more fun and nostalgic to play this way, but it can also go horribly wrong.  At the very least, use a forked server project for this.

Otherwise, it is probably better to run a newer server and support older clients via ViaVersion.  A nice compromise might be to run a 1.12 server, which predates Update Aquatic but fixes a lot of the flaws of earlier servers.  This requires the installation of the following plugins:

  • ViaVersion – allows newer clients on older servers for those connecting from a modern Java client.
  • ViaBackwards – allows older clients on newer servers.
  • ViaRewind – expands ViaBackwards support to really old clients, including the 1.8 that Eaglercraft needs.
  • ViaRewind-Legacy-Support – fixes a number of glitches and bugs that 1.8 clients would normally experience.
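
Installing these is just a matter of dropping the jars into the server's plugins directory and restarting; the file names below are illustrative and will vary by version:

$ ls plugins/
ViaBackwards.jar  ViaRewind-Legacy-Support.jar  ViaRewind.jar  ViaVersion.jar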

Once everything is set up, players will connect to the proxy, which will connect them to the authentication server.  nLogin can be set up a few different ways, but again, be sure to turn on the whitelist.  Once authenticated, players are connected to the main server.


Still a Blog?

by on Jan.18, 2025, under Site

Wow, so it's been almost a decade since I last posted here.  I've switched hosting situations multiple times in the intervening years and this site has suffered its fair share of bit-rot along the way.  I actually took it down at one point, as I assumed no one would notice.  It turns out there are a number of forums and whatnot that link directly to files in here, so I brought it back after receiving some frantic emails.  WordPress was hopelessly out of date, and when I jumped to the latest release it broke this ancient theme and a number of plugins, and there were also some problems with the DB due to its age (it was using a very old storage engine).  I thought all was lost, but I somehow got it going again.  I'm sticking to my guns on this highly dated-looking theme and I absolutely hate the new WordPress editor, but I found this plugin called "Classic Editor" that has saved the day.

So yeah, why am I even posting on here when I know no one will ever read it?  Mostly for my own posterity, I guess…and maybe the web crawlers will find these posts and show them on page 6 in their search results.  I've learned a valuable lesson over the past couple of decades: letting a few huge social media corporations hoover up all of the discourse on the Internet and move it into their walled gardens has effectively ruined it.  A lot of the hobby-related discussions that had moved from email lists into Internet forums back in sepia-times have now moved into Facebook or Reddit.  Many of the forums still physically exist, but they're mostly cobwebs and crickets and/or a fire hose of spam with maybe a few gray beards lurking here or there.  Facebook Groups are a terrible replacement for something like a forum, as it's impossible to find anything and the same questions get asked over and over.  Alas, this is where we are now and I don't see it ever turning around.

I stopped using Twitter about 5 years ago, as my feed slowly morphed into an AI stream of consciousness.  I was never into micro-blogging, but I used it as an RSS feed of sorts for the topics and publishers I was interested in.  Since it was no longer capable of performing that function, I started playing around with things like Mastodon…even setting up my own instance.  It kind of did what I wanted, but like so many before it (Diaspora, GNU Social, etc) it didn’t really work because no one was there generating content.  It’s the age-old problem of adoption.  That all changed in 2022 and now I find Mastodon does pretty much what I want thanks to relays and folks just plain-old posting there.  I do have a Bluesky account and that place feels a lot like Twitter did back when it was useful.  But I think we know the inevitable result will be the same once the VC money runs out.  I’ve also been playing around with Pixelfed and, thanks to the train wreck over at Reddit, Lemmy.  Something to be a bit excited about, at the very least.


MythTV and the Adesso Vista MCE Remote Control

by on Aug.16, 2023, under Computing

I am documenting here, for posterity, my journey while trying to maintain support for the Adesso Vista MCE remote control on my MythTV front ends.  Yes, I still use Myth and I believe I have had a backend running continuously since 2004, possibly earlier.  I’ve had plenty of misadventures with it over the years, but the one thorn that has always been in my side is IR remote control support.  In the early days, this was pretty clumsy as it required the use of LIRC: an out-of-tree set of drivers to support IR receivers through a serial interface.  I would usually use it with some generic universal remote control or a remote that came with a video capture card.  Once it was working, it usually worked fine for a while.  However when it came time to perform an OS upgrade, the remote would always be broken because of issues between LIRC and the kernel.  LIRC went through several generations of config changes as well, so my painstakingly-created lircrc would often not work even when I managed to get LIRC itself working.  Eventually I threw in the towel and bought a bespoke MCE remote that was supposed to “just work”.  There were a number of these available that were designed to work with Windows Media Center.  I chose…poorly.

Originally, the Adesso Vista MCE Remote Control was supported in MythTV via the mceusb LIRC driver.  Since it was being supported through LIRC, keys could easily be remapped in the usual way.  Of course, it wasn’t quite that simple as the usbhid driver always wanted to grab the device before mceusb could find it (it presents itself as a HID keyboard).  The workaround was to blacklist usbhid, which created other problems but I didn’t care about other HID devices on my frontends.  Even so, LIRC always remained frail; the drivers were a pain to port and compile, and the whole thing would still break somehow every couple of years at the next Debian upgrade.

Eventually, the mceusb driver was abandoned and the usbhid driver itself became the only way to support the Vista MCE.  It mostly worked, as the default keymap for it had been improved.  Most incorrect keys could be remapped within MythTV itself.  However for technical reasons, there are two very important keys that can only be remapped via ir-keytable triggered by udev rules: ESC (the back key) and M (to bring up the menu).  These rules did the trick:

KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",IMPORT{program}="input_id %p"
KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",ENV{ID_INPUT_KEYBOARD}=="1",ACTION=="add",SYMLINK="input/irremote0", RUN+="/usr/bin/ir-keytable --set-key=0x70029=KEY_BACKSPACE,0x7002a=KEY_ESC --device %N"
KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",ENV{ID_INPUT_MOUSE}=="1",ACTION=="add",SYMLINK="input/irremote1", RUN+="/usr/bin/ir-keytable --set-key=0x90002=KEY_M --device %N"

This worked fine until one day the ir-keytable tool dropped support for the --device option for reasons that are not clear.  Sean Young claimed it did not work and was "misleading", however there are no apparent alternatives since the remote is handled as a straight HID device and does not present itself as an rc device anywhere in the kernel.  The --device option is literally the only way to get it to work.

As with everything related to IR, the flavor of the month to support this thing has likely shifted to some other subsystem.  For now, I am pinning ir-keytable to the version from Debian 10 "Buster" (you can also use "apt-mark hold").  This still works as of the upgrade to Debian 12 "Bookworm", but I suspect that I will have to forward-port --device support back into the tool at some point or find some other way to reach the device.

Package: ir-keytable
Pin: release n=buster
Pin-Priority: 1000
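
The apt-mark alternative mentioned above is just:

$ sudo apt-mark hold ir-keytable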

This workaround seemed to work fine until I noticed a timing issue in udev causing the RUN command to fail in some cases where the device node had not quite been created by the kernel yet.  To work around this, the calls to ir-keytable were moved to a script that introduces an artificial delay before actually calling ir-keytable.  The udev rules were adjusted thusly:

KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",IMPORT{program}="input_id %p"
KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",ENV{ID_INPUT_KEYBOARD}=="1",ACTION=="add",SYMLINK="input/irremote0",RUN+="/usr/local/bin/mce-remap keyboard %N"
KERNEL=="event*",SUBSYSTEM=="input",ATTRS{idVendor}=="05a4",ATTRS{idProduct}=="9881",ENV{ID_INPUT_MOUSE}=="1",ACTION=="add",SYMLINK="input/irremote1",RUN+="/usr/local/bin/mce-remap mouse %N"
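
The mce-remap script itself is nothing fancy; something along these lines is all that is needed (a sketch: the two-second delay is arbitrary and the key codes are the same ones from the original rules):

#!/bin/sh
# /usr/local/bin/mce-remap <keyboard|mouse> <device-node>
# Give the kernel a moment to finish creating the device node, then remap.
sleep 2
case "$1" in
  keyboard)
    /usr/bin/ir-keytable --set-key=0x70029=KEY_BACKSPACE,0x7002a=KEY_ESC --device "$2"
    ;;
  mouse)
    /usr/bin/ir-keytable --set-key=0x90002=KEY_M --device "$2"
    ;;
esac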

I hate this remote.  I suppose I could buy a different one, but I don’t see the point as the entire concept of a DVR is waning.  Myth’s days are numbered.


Solo Cross Country

by on Aug.22, 2016, under Flight Training

After several weeks of bad weather, I was finally able to fly my first cross country solo.  Flew to UES and back.  It was an uneventful flight, which is exactly what I was hoping for.  The weather was great and everything went pretty much to plan.  The only snafu was that as I returned to PWK, the winds from the east were strengthening and I ended up pretty far east of the airport even though I was flying the flight plan.  I was correcting my heading as I came south, but when I made the turn over Lake Zurich, I needed to correct even more and ended up getting pushed east.  I eventually figured it out.

[Photos: solo-xcty-1-side, solo-xcty-1-forward, solo-xcty-1-back]


Stage 2 Check

by on Aug.08, 2016, under Flight Training

After several weeks of bad weather, I took my flight school's check for the final phase of their second training stage.  It involved taking my instructor to KUES and then practicing a diversion on the way back.  We got there fine, but I then had some uncertainty with my last checkpoint.  While looking at the chart, I got off course.  Once I figured it out, I did find the airport and landed there fine.  Lesson: don't abandon the plan, even if you think you are off course.  On the way back, we diverted to Galt (K10C).  My course was again a bit off and I ended up west of the airport, but I did spot it as we approached.  In that case, I didn't immediately draw the course line and so my heading wasn't very accurate to start with.  I did fix it along the way, using VOR/DME to find myself, but it wasn't enough.  We then flew back to KPWK, which I mainly did with the VOR.

Overall it wasn’t a complete success, but my instructor was confident enough to let me fly it solo.  So I will be making the same trip to KUES as my first cross country solo.


Night Swim

by on Jul.13, 2016, under Flight Training

I flew at night for the first time last night.  Not only was it my first time flying and landing at night, but we also did the nighttime cross country.  Sink or swim.

The cross country actually went quite well.  Had a bit of a snafu when we got there, as I entered the downwind for the wrong runway.  Check those instruments!  Once I fixed that, getting set up for that approach was quite an interesting experience.  I would have preferred that my first night landing happen at a familiar airport.  It's quite hard to judge height, as one would imagine, so I landed flat.

On the way back, I flew under the hood for a while.  Then he sprang lost procedures on me.  I found my position and figured out where to head, but then he started asking questions that made me second-guess myself.  After a bit of a panic, I confirmed my position with the VOR.  It turned out he was trying to get me to explain my plan after I got to where I wanted to go.  I had interpreted his comments as hints that I was not doing it correctly.  Sink or swim.

Once we got back to the airport, my second landing was better and the rest went fine.  We had a long debrief and discussed the next steps.  Next we have a "stage check", which involves a mock check ride of sorts.  If that goes well, I'll be doing my solo cross country flights.


Rebust the Rust

by on Jul.12, 2016, under Flight Training

Between work ramping up to a deadline and preparing for the written test, I took a month off of flying.  I probably should have squeezed a solo in there somewhere to keep my skills up, but that didn't happen.  I managed a 92% on the written test, and it is nice to have that behind me.  Since I hadn't flown in a month, my instructor went up with me for a couple of laps to shake the rust off.  Made a couple of dumb mistakes with ATC, but it went fine otherwise.  Flew solo after that, but the airport was so busy that I only got 3 laps in before calling it quits.

I went up again a few days later for some more practice, but again it was very busy.  It’s good to have practice with ATC procedures, but I also need to practice things like short field take offs and soft field landings.  They don’t like having small aircraft linger on the runway when they are squeezing us between small jet traffic.  I think I’ll need to head to a different airport for that.

