Designing My Remote Observatory

Note: this article was published in eight parts in 2014 in the Prairie Astronomer, the monthly newsletter of the Prairie Astronomy Club of Lincoln, Nebraska.

By Rick Johnson

For over two years prior to moving to Minnesota I was planning the observatory I’d build at the same time as the house. Since the club is thinking of doing this I thought it might be useful to cover what I went through doing my simpler design. Simpler in that it had to fit the needs of only one person, me. Also simpler because it was the next building over, so the connections were simpler, and if something went wrong (and it will) it was a short walk over to see to the problem. Murphy will be your constant companion until you get the bugs worked out. This can take quite a while unless you have lots of experience. Many start such projects but those without the needed experience almost always fail. I had 50 years of Murphy at my side and still it took a couple of months to get everything working smoothly.

My first consideration was what I wanted it to do. There are three basic types of imaging I could have done: planetary, wide field deep sky of structures of large angular size (up to several degrees across), and narrow angle of objects less than half a degree in size. Each is rather incompatible with the others. Imagers I know who do more than one also use more than one scope, camera and mount. They don’t try to make do with the wrong equipment.
Planetary work uses large aperture scopes to reduce exposure times to less than 1/60th of a second and cameras capable of taking 120 frames or more per second in order to capture enough frames for lucky imaging type processing. The idea is to capture enough frames in each color catching those instants of perfect or near perfect seeing that, when stacked, result in a good enough signal to noise ratio that heavy processing can be applied, giving a smooth result without generating false data. A less than top notch mount is fine as drifting actually helps image quality when processed correctly, or you can move a top notch mount slowly to create this effect. Camera and focal length need to be matched so that each pixel is about 0.1” to 0.2” of arc. This spreads light thin, meaning a 12” or larger scope is best for fast rotating planets like Jupiter: you need enough frames before its rotation blurs the image and exposures short enough to freeze seeing. Again, there are ways that processing can “unrevolve” the planet but they can only do so for a few minutes in the case of Jupiter and can’t help with moving lunar terminator shadows. Cost of the camera is relatively low (good ones start at only a few hundred dollars) but large aperture scopes and mounts sufficient for their weight are costly.
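The core of the lucky imaging idea can be sketched in a few lines. This is only an illustration, not the planetary software imagers actually use (real tools also align frames and sharpen), and the Laplacian-variance sharpness score is just one common choice of quality metric:

```python
import numpy as np

def sharpness(frame):
    """Score a frame by the variance of a simple 4-neighbour Laplacian:
    frames caught in instants of good seeing have stronger local contrast."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return lap.var()

def lucky_stack(frames, keep_fraction=0.1):
    """Average only the sharpest `keep_fraction` of the frames."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

Stacking only the best few percent of thousands of short exposures is what lets the heavy processing be applied without generating false data.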

Wide angle deep sky imaging needs a deep sky camera. These require regulated cooling and are designed for long exposures of up to 30 minutes or longer. They take several seconds or longer (mine takes 28 seconds) to download an image to keep noise to a minimum, and often can’t take exposures faster than 0.1 seconds; even those will expose the center more than the edge due to how the shutter works. Most are a poor choice for planetary work. While some are relatively inexpensive ($2,000 new) they use a small chip, meaning they take in a rather small field of view unless used on very small scopes (say 400mm to 600mm). APO refractors are best for such cameras, somewhat negating the camera’s low cost. Since the image scale will be low (probably 3” of arc or more per pixel) such systems don’t require top notch mounts and are rather immune to seeing conditions, meaning they can be used on nights too poor for planetary or narrow angle work. For beginners this is a good place to start as it puts a lot less pressure on getting every last detail right. Processing such images, especially near light pollution, takes a lot of skill however due to the gradients the light pollution adds. This also puts great stress on the quality of your calibration images, something I find most beginners have big trouble accomplishing, judging by the 30 or so questions a month I get from beginners unable to do usable calibration, especially under light polluted skies. This type of imaging is helped in many cases by the use of narrow band filters (H alpha of 6nm or less and OIII and SII filters of 3nm band width). These can add over $1,000 to the cost of the system but make imaging emission nebulae from in or near towns much easier — IF you can guide accurately for 30 minutes at a time.

Narrow field imaging puts heavy demands on the quality of the optics and mount. Tracking over a 10 to 30 minute exposure needs to be accurate to a small fraction of a second of arc, far from what most beginners can achieve with typical mounts. Such imaging is best with large aperture scopes, which puts still heavier demands on the mount.

Since the field of view is very small, pointing accuracy that would put the object in a visual 1 degree field is far too poor to find much of anything. A mount that can point to one or two minutes of arc or better is a requirement. Fortunately the mounts capable of such pointing are also the ones capable of high tracking accuracy, so meet one need and you likely meet the other. Since the focal length of such systems is rather large (3650mm in my case) you need a large imaging chip or you get only a soda straw view and miss the setting your target lies in.

Unfortunately such cameras start at about $8,000 new and use 2” or larger (65mm square) filters, which are far more expensive than 1.25” round filters. Add narrow band filters for some planetary nebulae and small emission nebulae and watch the filter cost go over $4,000 depending on size and bandwidth (narrower is better but really expensive). Since I had a lot of imaging experience, had taken many wide field images, and wasn’t much interested in planetary work, I decided I was ready for narrow field imaging and planned the observatory accordingly.

My first decision was the mount. Most imagers will tell you it is by far the most important piece of equipment you will ever buy for this hobby.

I strongly agree. A good mount will often eat up half your equipment budget. At the time there were only a few to choose from that met my needs. While several offered the weight capacity and tracking ability I needed, few were truly robotic mounts designed to be woken up from afar. In fact I found only two manufacturers with mounts that met my requirements.

The Astro-Physics 1200 and the Paramount ME (just announced as an upgrade from their Paramount GT-1100S). Others have since entered the market and the AP 1200 has been replaced with an even better mount, but that’s true of the Paramount ME as well. The AP was several thousand dollars cheaper but wasn’t as fully robotic as the Paramount. It required syncing to get its bearings each time it was turned on. It couldn’t easily track asteroids and comets with a single click of the mouse but needed special ASCOM software and lots of manual intervention I wanted to avoid. Many use it in remote observatories but, at the time, nothing AP made had through-the-mount cabling (they now have such mounts). When running remote I can’t see if a cable is about to snag. I’d have to tie everything up securely yet allow full mount motion and hope nothing came loose or a mouse didn’t gnaw through a tie. The through-the-mount cabling of the Paramount meant no worries about cables and pretty well sealed the deal.

Also, since it was newly announced they were offering a price break of several thousand dollars. That made it not all that much more expensive than the AP, and it included about $1,000 of software, all of which I’d have to buy with the AP mount, further making it a simple decision. Though I was still two years away from even starting construction I put in a down payment to lock in the introductory price. It was backordered for 18 months, I was told. It came in 11 as it turned out. Still, it would be another two years before I had an observatory complete enough to test it out. When it came the FedEx driver was a little gal under 5 feet tall (she sat on two pillows to drive the truck) and the truck couldn’t go up our steep drive in Lincoln without tearing up the concrete. Shipping weight was about 200 pounds so I ended up having to get it out of the truck (no lift) and onto a cart to carry it up the drive and into the garage. My back has never been the same. I should have told them to get a driver who could do the lifting but, new mount and adrenalin flowing, I somehow managed it. Being 10 years younger helped a lot! The mount sat in the garage in Lincoln a year, then in the garage here for a year, before I could even test it out. When I did I found it in perfect condition and it remains so 8 years of heavy use later.

New cameras and telescopes (OTA for Optical Tube Assembly) were hitting the market constantly so I wasn’t ready to buy this far in advance. That turned out to be good because I had a ton of learning to do before I’d be ready to make an intelligent choice for either. I knew film work but had no concept of what was involved in digital imaging. I spent the next year going to “CCD university”. I found it was quite important to understand the theory of how CCD imaging worked in order to make an intelligent choice of scope and camera. I get a lot of email from folks who didn’t do their homework and now are trying to make their bad decisions work and hope I have a magic wand to make it all right. I don’t.

It turns out the scope and camera are so interrelated that it is best to consider them as one. A camera good for one scope may be lousy for another, or vice versa. They need to be matched rather closely if high resolution work is the goal. I also had to learn at least the theory of processing digital images: what it takes to get accurate calibration (it varies with cameras) and how to pull the detail out of the numbers collected. All images start as numbers between 0 and 65535. Turning those numbers into an image is easy; turning them into a good image is far from easy.
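At its very simplest, turning those 16 bit numbers into something displayable is a linear stretch between chosen black and white points. A minimal sketch of the idea (real processing uses far fancier nonlinear stretches, which is where the difficulty lies):

```python
import numpy as np

def linear_stretch(data, black, white):
    """Map raw 16-bit counts (0..65535) to a 0..255 display range.
    `black` and `white` are the chosen display limits in raw counts;
    everything outside them is clipped."""
    scaled = (data.astype(np.float64) - black) / (white - black)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)
```

Choosing the black and white points well is already a judgment call; pulling faint detail out without blowing out the bright parts is what takes the expensive software and the steep learning curve.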

I’ll get into what I found I needed to learn just to make a good match of scope and camera in the next installment. It was far more than I ever expected. I’ll get to the actual construction later. That turned out to be the easy part and depended somewhat on the scope and camera combination I chose.

Matching the Camera and Telescope

After doing only a small amount of research I realized that to do the high resolution work I wanted I’d need to consider the camera and telescope as a whole. First off you have to forget much of what you learned about visual astronomy. Power is meaningless to CCD work. What matters is to try to capture resolution equal to the best long term seeing at your site. Visually you might get instants of very fine resolution, but then the star gets fuzzy, then sharp again, or it moves one way or another like a stone seen through the running water of a stream. Since your shutter will be open for many minutes it will catch the star both in and out of focus as well as its motion around the field. All this reduces your resolution. A typical location (not atop one of the premier observing mountains) has resolution of about 2.5” of arc on a typical night, though it can sometimes dip below 2” a few nights of the year. Others will be far worse. Testing showed 2.5” was typical of my location as well.

Sampling theory for high contrast features says you need to sample at twice the rate of what you are sampling. That would mean you’d want to sample at 1” per pixel on a 2” night. WRONG! That doesn’t apply here as the Airy disk is round but the camera’s pixels are square! A bit of drawing on graph paper will show you that this sampling rate is too low. It needs to be 3 to 3.5 times the rate of what you are sampling, so for a 2” night you would want to sample at 0.67” to 0.57” per pixel.
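The arithmetic is trivial but worth writing down; `factor` here is the 3× to 3.5× oversampling that square pixels require:

```python
def target_pixel_scale(seeing_arcsec, factor=3.0):
    """Pixel scale (arcsec per pixel) needed to fully sample stars in
    `seeing_arcsec` seeing; `factor` is the 3x-3.5x oversampling that
    square pixels require (not the textbook Nyquist factor of 2)."""
    return seeing_arcsec / factor

# a 2" night sampled at 3x and 3.5x:
print(target_pixel_scale(2.0, 3.0), target_pixel_scale(2.0, 3.5))
```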

I had my sampling rate. I’d need a camera that, on whatever scope I bought, would resolve 0.67” or better. Now comes the mathematics. You can determine what focal length is needed for a given pixel size to achieve a given resolution with the simple formula Fl = 206 × pixel_size / resolution, with Fl in mm, pixel size in microns, and resolution in seconds of arc. Since I wanted a resolution of about 0.6” the formula became Fl = 343 × pixel_size. While telescopes came in all sorts of focal lengths, camera pixel sizes were more limited. At that time Kodak was the main supplier of amateur chips and it made chips with 9 and 6.8 micron pixels and one with rectangular pixels of 23×27 microns. A few small Sony chips were coming to market, sold by an English company, but otherwise this was about all I had to choose from in 2004. At the time the Sony chips were pretty unknown so that left the Kodak chips. Most Sony chips used rectangular pixels, which added a processing step to square them up. There were some exotic (read expensive) chips from Fairchild and other sources but they had 12 micron or larger pixels.

Applying the formula I found that for pixel sizes of 23, 12, 9 and 6.8 microns I needed a scope with a focal length of 7889mm, 4116mm, 3087mm, and 2332mm respectively. Those are some rather long focal lengths, especially the first two! Fortunately the last two were possible. Since long focal length means a rather small field of view for a given size chip, I’d want as large a chip as I could afford or was made. That boiled down to 4 chips. Two at 9 microns: the KAF 1600, small at 13.8×9.2mm, and the KAF 6303, just on the market, with a size of 27.65×18.48mm, 4 times larger in area. At 6.8 microns there was the KAF 3200 at 14.9×10mm in size, and finally the KAI 11000, which has a 9 micron pixel and is far larger at 36×24.7mm. The latter is the same size as a standard 35mm film frame.
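A quick check of those numbers. This sketch uses the unrounded plate scale constant 206.265 rather than the rounded 206 (or 343) above, so its results differ slightly from the figures in the text:

```python
def focal_length_mm(pixel_um, scale_arcsec):
    """Focal length (mm) that gives `scale_arcsec` per pixel for a
    pixel `pixel_um` microns wide: Fl = 206.265 * pixel / scale."""
    return 206.265 * pixel_um / scale_arcsec

# the pixel sizes available in 2004, at the ~0.6" target scale:
for p in (23, 12, 9, 6.8):
    print(f"{p} um -> {focal_length_mm(p, 0.6):.0f} mm")
```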

There’s a lot more to picking an imaging chip than these simple factors. Like everything in this hobby there are lots of compromises to be made. Factors to consider are read noise, dark current, full well capacity, blooming, and quantum efficiency, to name the more important ones. Large read noise means longer sub exposures and less dynamic range (brightest to dimmest things it can record in one exposure). High dark current means you need more cooling and will likely have more hot pixels to deal with. Large full well capacity increases dynamic range; read noise reduces it. Some chips have anti-blooming gates to eliminate the need to process out blooms (overflowing pixel wells) but these reduce quantum efficiency. Quantum efficiency is the percentage of photons that get recorded. This ranged from below 40% to over 85% depending on wavelength and the chip.
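Dynamic range ties full well capacity and read noise together directly. A sketch with purely illustrative numbers, not the specifications of any of the chips above:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range in dB: ratio of full well capacity to read noise,
    both in electrons. Bigger wells raise it; noisier reads lower it."""
    return 20 * math.log10(full_well_e / read_noise_e)

# illustrative only: a 100,000 e- well with 10 e- read noise gives 80 dB
print(dynamic_range_db(100000, 10))
```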

With these issues in mind it was time to consider a telescope. At a 0.6” pixel, aperture is pretty much the sole determinant of how faint I could go and how fast I could get there. So larger is better. A 10” scope puts 4 times as many photons into a 0.6” pixel as does a 5” scope. Notice f ratio IS NOT INVOLVED! At a given aperture and pixel resolution (0.6” in this case), only aperture matters no matter what the f ratio of the scope. So in choosing a scope the f ratio was not important, only that I could achieve the resolution I wanted.

For such long focal lengths and large aperture I would need a reflector. Since balance is an issue with a Newtonian with a heavy camera hanging on the side, not to mention cables that could snag simply because I couldn’t see what they were doing (the Paramount’s cabling system was designed for cameras in the back), I quickly decided a Cassegrain design was necessary. Best would be an RC type but those were out of my price range. Vixen made a modified Dall Kirkham that had a pretty good field of view but its focal length was a poor match to these cameras, and users complained greatly about its very thick spider and its square attachment to the secondary causing stars to look square. I quickly bypassed it as well. That left Schmidt Cassegrain scopes. Those however have a strongly curved field, so needed a corrector to flatten it — so does the RC with large fields like I wanted. The correctors for SCTs also reduced the focal length from f/10 to f/6.3. SCTs came in sizes up to 14” that I could afford. A 14” f/10 at f/6.3 with the KAF 3200 chip would give a 0.62” pixel. No other combination worked. So that decided it, I thought. Before I moved and started construction Meade announced a new design with a flat field needing no corrector that worked at f/10. The 12” with a 9 micron chip would give almost exactly the 0.6” pixel I wanted.  That meant either the KAF 6303 or KAI 11000.
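The aperture-only argument in one line: once the pixel scale on the sky is fixed, the f ratio cancels out and only collecting area remains.

```python
def relative_photon_rate(aperture_a, aperture_b):
    """How many more photons scope A puts into a pixel of fixed angular
    size than scope B: collecting area scales as aperture squared, and
    the f ratio drops out entirely. Apertures in any common unit."""
    return (aperture_a / aperture_b) ** 2

# a 10" scope vs a 5" scope at the same pixel scale:
print(relative_photon_rate(10, 5))
```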
I’d rule out the KAF 1600 and 3200 as too small. The KAF 6303 had a lot going for it: it matched the field of the 12” LX200R, had good dynamic range and pretty good QE. Its only drawback was severe blooms. So I looked to ordering the 12” LX200R. Oops, it only came on their mount, which was far inferior to the one I already had. An OTA-only version was announced, and since I was still in Lincoln it would likely be out by the time I was ready for it.

Time to consider which manufacturer of a camera with the KAF 6303 I would go with. There were about 7 at the time; 4 of those are now history. I sensed that would be the case and only looked at the three that seemed sound: FLI, Apogee and SBIG. The SBIG version included a filter wheel inside the camera while the others used an external filter wheel. SBIG’s camera also included a guider chip while the others needed a separate guider. Since a separate guider goes ahead of the filters, that means far more guide stars to choose from, especially if using narrow band filters. On the other hand, the internal wheel held 5 filters, not the 7 or 8 needed for full coverage. Total cost was similar, a bit higher for the 7/8 filter system. I left that open for now.

The house was now under construction and the observatory pier in. The observatory itself was under construction when we moved into the house in 2005. I still only had a mount. By March 2006 the building was done but had no roof, and no scope had been purchased as the LX200R series was still only available with the mount. Turns out this was fortunate, but I didn’t see it that way at the time. I even considered buying it and trying to sell the mount alone, but it was so bad for imaging from all reports I wouldn’t have wanted to even give it away. Nor did I have a camera.

Turns out there was still more to consider about that: OSC or mono. OSC had just been announced. That saves filter and filter wheel costs. Sounds good, but is it? The KAI 11000 came in an OSC version but not the KAF 6303.

OSC was new, and I needed to do more research as I now had even more options. How to decide between them? Though the observatory was now being built (but for the roof), I still didn’t know which camera to get for the 12” LX200R. Even after two years I still needed more research. This was turning out to be much more complicated than I thought when I started. How that changed things I’ll cover next month. I’ll also cover why there was no roof, which is another snag I didn’t expect.

Camera, Roof and other Issues

OSC (One Shot Color) sounded like the greatest thing since achromatic optics, but was it really? Gather all data at once, no filters or filter wheel to buy so much cheaper, and to read the ads it almost made your morning coffee. I had been in contact with an amateur about a 70 minute drive from here, a top notch imager who made a good income selling his images. The maker of the first OSC cameras using Sony CCDs had given him one to test. He also had a similar Sony mono camera: different chip but otherwise similar. I drove down to learn from the expert.

What sounded great turned out not to be so hot after all. The QE was much lower. It took him nearly twice as long to get a good image with the OSC camera because of this. The dye filters blocked much of the light that a mono camera’s dielectric filters passed. The camera he had used CYM filters, which are more efficient than RGB, but not enough so. They made processing more difficult and photometry or narrow band imaging nearly impossible. He found one use for it however: he could save imaging time by using two complete systems simultaneously, one gathering mono data with the mono camera while another telescope with the OSC camera gathered the color data. Watching him work was like watching that guy in the circus keeping all those spinning plates going. Also it doubled the cost. With one scope I’d have to stick to mono. After playing with his data for a few days and measuring the noise levels and color accuracy I realized OSC wasn’t what many think it to be. It has an advantage with moving objects like bright comets but that’s about it. Might as well go with a DSLR for that and save a lot of money.

By now I had an observatory but no roof. I need to backtrack on this. I had planned on a clamshell design that was being designed for me for free by the head architectural engineer for 3M. They have a huge facility on the lake and want to keep it as natural as possible. They didn’t want my observatory to mess that up, so he was volunteered to do the design work and their crew would build most of it for materials cost. But he never seemed to get moving. Promised dates passed. Finally he admitted he didn’t know how to do it! Now I had a building, snow coming and no roof. A roll off roof over 16’ in the air seemed nearly impossible but maybe that would have to do.

When my carpenter needed a 4th hand he’d hire this rather kooky guy who was rather rattle brained but followed direction well. He said his dad could do it. Not if he was like this son. But my carpenter said that might be a good idea. I went to see him. He was as laid back as the son was hyper. Talked like a 78 record at 33 rpm. Drove me nuts. But everyone said he was a mechanical genius. I told him that it couldn’t use tie downs yet needed to be secure in high winds. No way I wanted to go out at –40° to open up and close down as we do at Hyde. He said he’d think about it. A week later he had a design that did everything I wanted and a source for all materials.

Seems my timing was perfect. The local potato plant had just gotten the contract to provide about 80% of the French fries used by McDonald’s. But to get the contract all their equipment had to be aluminum or stainless steel. The plant’s galvanized steel production lines had to be totally replaced. This fellow had the contract to keep up the plant’s huge parking lot (snow, cleaning and repairs to the surface; hundreds of semi trucks a day do a number on it). He was able to get everything needed at scrap steel prices from their scrapped production lines. My roof now rolls on rollers that used to move tons of potatoes about the plant.
All the parts needed to keep the roof firmly tied down to the track and yet roll were already made! A bit of modification and the roof would be ready. But he also has the snow removal and street cleaning duties for several small towns. We had a very snowy year that year and it all went into the observatory (no mount) as he had no time to do the roof. Finally in March it was done. Thanks to the rollers it rolled with ease using just one finger, though it is powered by a modified garage door opener and can be opened from the house using the car remote. (I later learned to keep that in a drawer when a cat walked across it, opening the roof during a rain storm!) Now I had a roof (after a ton of shoveling, as the snow was 4 feet deep in the observatory). It’s no fun shoveling it up and out over 5.5 foot walls! But still no scope (it was still only available on their insufficient mount) and I’d not decided on the camera.

After talking with my friend with the OSC cameras and watching him work, I decided it might be a good idea to learn the system with a much shorter focal length that was far more forgiving of guiding and pointing errors. Besides, he said it would be foolish to start with such a long focal length. So I mounted my 6” f/4 on the Paramount ME, borrowed a very well used ST-7 from that imager, and started to learn how to run the observatory using the software that came with the mount, which included camera control software. At this point, to keep things simple, I worked from in the observatory (it was a warm spring) rather than networked to the house. Anything to simplify things. Of course I did everything the hard way at first but got good results (good to me at the time; now I consider them rather poor) right off by using a pixel size of 3” rather than the 0.6” I was shooting for. I was working in mono only, again to keep things simple. I started with short exposures and no guiding, adding guiding and longer exposures as I got more confident in the system. Besides, I hadn’t determined how to best control things from the house.

I quickly discovered a major problem — midge flies! Millions were attracted to the computer screen. They live only one night so they died by the thousands on the screen, falling into the keyboard, which after two nights was completely unusable. I was dead until I could get a new one in. I’d have to move to the house, and fast, but how? I could set up a small computer in the observatory and use a network between it and a computer in the house. This seemed the most obvious, except temperatures in winter hit –40° and colder. Hard drives don’t boot at such temperatures. Today I might have used a solid state drive but that option didn’t exist in 2006. I could build an insulated compartment for the computer and put in a simple heating system.
I’d have to turn that on hours before I planned to use the observatory, which meant leaving the heater on much of the time since even bad weather can clear unexpectedly. Thinking that was the way I’d go, I had two Cat 5 lines run from the house to the observatory when it was built, so that part was ready to go. Then Mark Dahmke suggested using a simple device that converted USB 1.1 to a signal that could be carried by Cat 5 wiring, then reconverted it to USB 1.1 at the other end. This was only $100. All my gear was USB 1.1; nothing at the time used USB 2. It seemed a quick and dirty way to get going so I ordered one. It and the keyboard arrived the same day. A quick keyboard replacement in the laptop and I was ready to try it out — in the observatory. I needed to first see if it would work. So I ran the USB line to the wall, converted to Cat 5, and sent it to the house. There a short cable ran it back into the other line and back out to the observatory. I plugged that into the converter to go back to USB, plugged that into the computer and — IT WORKED. Quickly I shut down before many midge flies showed up. I went into the house as it certainly would work there.

I plugged into the Cat 5 cable at the house, converted back to USB and fired it up. It failed! It couldn’t see the observatory at all. The Cat 5 was installed by my electricians, who we quickly learned to call “Dumb and Dumber”. I had put in two lines to the house just to have the second for expansion, as adding it later would be difficult with walls and basement ceiling in place. On a hunch I plugged the house side into the other cable position. Yep, it now worked. Dumb or Dumber had reversed the cable wiring. The top socket in the observatory was the lower one in the house. Not only that, I later found the Cat 5 wiring through the house often had crossed wires, so none of it worked until I rewired it all. Some wires that appeared correct were broken inside the insulation so the circuit was open. Dumb and Dumber indeed! I now can run the observatory from anywhere in the house.

I was quickly learning another lesson. You need proper software to process these images and that isn’t cheap. I was trying to get by with free software that came with the camera and mount. While it could take and calibrate the images, it fell woefully short for everything else it claimed to do (not much), and if I was going to get much out of the images I’d need software that worked with the full 16 bit range of the FITS images rather than the limited 8 bit range of most image processing software out there at the time, Photoshop excepted. This was quickly turning into a black hole for money. One package did a great job of stacking images and rejecting noise but couldn’t handle stacking images that varied in image scale. That took another expensive program. Then, even if I bought Photoshop it couldn’t read FITS files, and the free software didn’t make good TIF files from the FITS. While that software wasn’t expensive it so took over the computer it was a major headache. Later NASA published FITS Liberator, which solved this for free.
Still, assembling the required software was a major headache and, several thousand dollars later, I finally had software that could do the job. Today there are ways to cut that in half but they didn’t exist then. I also found my computer woefully inadequate for the severe CPU needs of noise rejection stacking. That could take 30 minutes a stack, and it would take many tries to find the one best method for a particular stack. And that was with a rather small imaging chip. The one I intended to use was many times larger and would give new meaning to SLOW. These programs would also need 16 gigabytes of RAM with the larger chip size. I’d thought my computer pretty good. Not for this chore! I also found just learning to do image processing was far more difficult than I had imagined. No matter the software, the learning curve is very steep. I’m still climbing it. Sometimes I think I see the top, then it recedes back into the fog of ignorance again. From the email I get I must make it look simple in my observatory mailing list. It isn’t, as the ton of very poorly processed images on the net will attest.

With OSC off the table I was back to the 12” LX200R and a camera using the KAI-11000 chip. Though the KAF-6303 was also an option, I was starting to discount it. But I’d still not decided on the camera maker, and the scope was still only sold with their mount. I’ll cover the issues of deciding which camera maker I’d go with, and why I ended up with a different scope, next month. Also, software options have increased greatly since 2006, though the cost is still astronomical to say the least. I’ll try to cover such changes in a later installment of this saga. Hardware (scope and camera) has expanded greatly as well. That too is for later on. Other issues I had to solve include how to focus and take calibration images remotely, and how to turn on only the gear I needed that night. Lots of little things you don’t think about until you can’t do something that’s necessary.

The above graph shows the quantum efficiency of the OSC version compared to the mono version of the KAI-11002. (The KAI-11000 has since been replaced by the KAI-11002.) Note that only one fourth of the pixels even see blue and red light, with half seeing the small portion of green. Add the areas under the three colors together and they are about half the area under the mono curve. The RGB filters used in mono imaging have nearly vertical cutoffs: all light of their color is collected, not just some of it. Then too, all pixels see each color, so red and blue (by far the most important colors) collect at about 8 times the rate they do with OSC. Also note the dye filters pass some IR light, which must be filtered out or the color balance will skew. This explains why OSC is actually slower than mono imaging, but far cheaper, with only one filter and no filter wheel to buy. Mono camera color filters block all UV and IR light so no separate filter is needed. DSLRs already have the IR block filter and are cheaper than a similar sized CCD OSC camera. But their IR filter also blocks H alpha light, important to astro imaging, so an entire industry for replacing the filter has sprung up.
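The 8× figure can be reconstructed from two rough assumptions: one Bayer pixel in four sees red (or blue), and the dye filters pass roughly half the light the dielectric filters do. Both numbers are read off the QE curves, not exact specifications:

```python
def mono_vs_osc_rate(bayer_fraction, dye_throughput=0.5):
    """Rough ratio of mono to OSC collection rate for one color.
    `bayer_fraction`: share of Bayer pixels seeing that color
    (1/4 for red and blue, 1/2 for green); `dye_throughput`: light
    passed by the dye filters relative to dielectric filters
    (an assumed ~0.5 here, eyeballed from the QE curves)."""
    return 1.0 / (bayer_fraction * dye_throughput)

# every mono pixel sees red at full filter throughput vs one OSC
# pixel in four at roughly half throughput:
print(mono_vs_osc_rate(0.25))   # red or blue
print(mono_vs_osc_rate(0.5))    # green
```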

The Scope and Camera Issue Resolved

While I now had a working observatory with the 6” f/4, it was far from what I wanted. The low resolution of 3” per pixel covered a multitude of sins that I couldn’t survive at 0.6”. Focusing was one issue. To focus remotely you need an electric focuser, and not just any one. I had co-opted the one from my visual scope that I used for solar imaging with a video camera and put it on the 6”. The video camera was very light compared to the CCD, which in turn was light compared to the larger cameras I was considering. While the focuser cost over $100, it slipped under the weight of the CCD. It may have slipped under the video camera too, but since that was focused manually it didn’t matter. Nor did it matter that it was powered only by a DC motor that moved about the same amount each time it was activated. But to focus remotely the focuser needs to be consistent and never slip. A much better way of focusing would be needed.

At the time most imagers used (and still do) a free program called Focus Max. It works by making many calibration runs, taking an image as it moves the focuser in known steps from out of focus, through focus, and beyond, noting how the out of focus disk changes in size with each step. From many such runs it can calculate where the exact focus is, but only if the motor driving the focuser always moves focus exactly the same amount each time. Any variation and it will fail. Since the cameras I was looking at were among the heaviest made, I’d need a heavy duty focuser system.

Oddly, one option was rather “cheap.” If the mirror in an SCT is left unlocked, another company made a unit that would use the focuser of the SCT along with a stepper motor that always approached focus from the same direction so backlash was eliminated. Together with Focus Max it could calculate the correct focus position from a couple of trial images of a fraction of a second each, then move to that exact position. While Focus Max was free, the controller and stepper motor for the unit weren’t. That was $500, more money into the black hole this was turning into, but when I looked up the cost of an external focuser that met the requirements (the one that came with the LX200R didn’t) the cost was well over $1000. I figured this less-than-half-the-cost option best (yep, I was wrong). Not yet having a scope, I filed this information away and went about deciding which brand of camera to buy: Apogee, FLI or SBIG.

SBIG seemed to have a neat idea. It was the only one that included a built-in guiding system. It had a second chip that looked just over the top of the imaging chip and could guide the scope while the imaging chip was taking the image, and do so with one power cord and one USB cord rather than the two sets of USB and power cords needed for the other cameras. (Some less sensitive guide cameras were powered over USB, saving one cord.) The in-camera guider had another advantage. With the two chips in the same unit, any possibility of flexure between the two systems was eliminated. A separate guide system has to be super rigid or guiding at 0.6” will fail. This system assured rigidity. Sounds great but…

There’s always that dang but butting in. The guide chip was very red sensitive and rather blind to blue light. It worked through the color filters when taking the filtered frames needed for color, and each of those filters passed only about a third of the light. Since the chip was highly red sensitive that wasn’t bad for the red filter, but it was horrible for the blue filter. In fact, talking to users of the cameras, all mentioned that since the majority of stars are red dwarfs, finding a sufficiently bright star to guide on when taking the blue frames could be extremely difficult. Since the chip was mounted rigidly you could only use the stars it saw. To find a good guide star the entire camera would often need to be rotated and the target placed well off center. Rotation meant the object wouldn’t necessarily be framed well, and being off center would just add to the problem. Like OSC, this system was starting to look like less than it first appeared. After talking with many users of all three cameras, I found most FLI users were highly satisfied, with Apogee users not all that far behind, but many SBIG users who paid extra for that internal guider had found they needed the same off axis guider and guide camera used with the other two cameras. In fact, today SBIG has abandoned this approach and no longer sells cameras with the guide chip in them, but for one model. While I hadn’t decided between the low QE STL-11000, which didn’t bloom, and the STF-6303, which did, I was going to go with FLI. That meant I needed to shop for a good off axis guider and guide camera. More research was needed. But before I got very far a couple of unforeseen events changed my plans entirely.

About this time Meade announced that their 14” and 16” LX200Rs were now available in OTA versions (no mount), but it would be another year before the 12” would be. Wait a year or go with the 14”? A few calculations said the 14” would give a 0.5” pixel rather than 0.62”, but that difference wasn’t all that great, so I placed an order with the only company that had one without a two month or longer backlog. It came in a week. But no camera. The small chip of the ST-7 just didn’t work at that focal length, so I continued to work with the 6” f/4, learning more about image processing and how to use the abilities of the software, which was very complicated to this old guy’s brain. This was a good thing as I’d have been in over my head with the 14” and my state of ignorance.
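The 0.5” versus 0.62” figures fall out of the standard plate-scale formula. A quick sketch, assuming the 9 micron pixels of the KAI-11000 chip and f/10 optics:

```python
# Plate-scale check behind the 12" vs 14" decision.
# scale ("/pixel) = 206.265 * pixel_size(um) / focal_length(mm)

def arcsec_per_pixel(aperture_mm, f_ratio, pixel_um):
    focal_mm = aperture_mm * f_ratio
    return 206.265 * pixel_um / focal_mm

for inches, aperture_mm in ((12, 304.8), (14, 355.6)):
    scale = arcsec_per_pixel(aperture_mm, 10, 9.0)
    print(f'{inches}" f/10: {scale:.2f}"/pixel')
# → 12" gives ~0.61"/pixel, 14" gives ~0.52"/pixel
```

Rounded, those are the 0.62” and 0.5” per pixel values quoted above.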

Now which camera, the far more efficient 6303 or the non-blooming but slow 11000? That was quickly answered when another imager I knew offered me his 11000 with top filters (for that time) at a price I couldn’t refuse. I figured that while the camera was slower, I was learning how to automate the imaging, so time wasn’t as critical as I’d originally thought. I could be doing other things while the image was taken. Dealing with the blooms of the ST-7 was a pain (software is better today). Why was this fellow selling? He didn’t like the lower QE and was moving to the 6303! He was going to do mostly narrow band imaging so didn’t need the LRGB filters, and was getting a narrower H alpha filter to better work during a full moon. And yes, it was an FLI he was getting. Now I had all the pieces, I thought. The 14” came with an electric focuser I could control from the house when the mirror was “locked”. This sounded great. As I’ve already discussed, it didn’t turn out that way. The 6” came off and the 14” went on. The ST-7 came off and went back to its owner (temporarily) and the STL-11000XM went on. A check from the house showed all was working great. I was in business. Or so I thought. I didn’t yet know some things I’ve already told you. But I was going to discover them the hard way, and quickly.

One thing I haven’t mentioned is depth of focus. At f/4 the math says that to have good focus I must position the CCD within 48 microns of the right position (less than half the width of a human hair). At f/10, which the new scope ran at, I had plenty of leeway as the zone was 287 microns, six times greater. That should be easy compared to the 6” scope. But the 6” scope was working at one sixth the resolution! That was hiding my focus errors. Now they were out in view and hitting focus was nearly impossible. While the software would send a short pulse to the DC focus motor, the motor would move a different distance each pulse, so hitting focus was a hit or miss affair. It also meant the CCD frame, which is far larger than the one I had been using, had to sit square to the optical axis to within 143.5 microns, a bit more than a human hair. The Meade focuser fell down here as well. It wasn’t strong enough for the weight of the camera and would sag. How much it sagged depended on where the scope was pointed, so there was no way to hold the camera at right angles to the optical axis and move the scope. This issue was made worse because there was no rigid connection between the focuser and camera. You just slid in the 2” draw tube from the camera and hoped it stayed square, but under the weight that didn’t happen either. Also, the 1.7” clear aperture vignetted the image. The focuser might work with small cameras, but it was worthless for the STL-11000XM I had.
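The key point in those numbers is that the critical focus zone grows with the square of the focal ratio. A small sketch; the constant here (4.4 times a wavelength of about 0.68 micron) is an assumption I chose because it roughly reproduces the article's 48 and 287 micron figures, not a value from the article itself:

```python
# Critical focus zone (CFZ) scales as focal ratio squared.
# Constant is an illustrative assumption, tuned to roughly match
# the 48 um (f/4) and 287 um (f/10) figures in the text.

def cfz_microns(f_ratio, wavelength_um=0.68):
    return 4.4 * wavelength_um * f_ratio ** 2

for f in (4, 10):
    print(f"f/{f}: CFZ ~ {cfz_microns(f):.0f} microns")

# The f/10 zone is (10/4)**2 = 6.25 times deeper than the f/4 zone,
# the "six times greater" leeway mentioned above.
```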

Also, at the time I’d not heard of the Focus Max program I mentioned earlier. I was trying to use the focus routine in CCDSoft and a couple of others, but all struggled to find good focus unless seeing was perfect. Then, with 10 minutes of work, I could find a good focus. But the scope was very temperature sensitive. I was finding that even if I did nail the focus for a bit, a temperature change of even 1°C would take the image out of focus, and it was again 10 minutes of struggle to refocus. Change filters and you had to refocus as well. Focusing was turning into a nightmare.

Consulting the imager who loaned me the ST-7, I was told about RoboFocus and Focus Max. That combination worked well with the SCT’s internal moving mirror focuser, I was told. But that still left me with a camera that sagged as the scope moved around the sky. I needed a better connection system. More research and more money. For only about $50 there was a coupling with a 2” internal diameter that did rigidly mount the camera to the scope, but I’d already found I was getting rather severe vignetting in the corners. Other users of the camera all told me I needed at least 3” couplings to assure all the light the scope could provide was hitting the chip. Those weren’t $50! In fact, at the time I found only one provider, and they wanted about $300.

I had more homework to do! In the meantime I started to learn mono imaging at 0.5” per pixel. That was a whole new ballgame from imaging at 3” per pixel. If I thought it was hard at 3”, it seems the difficulty goes up by the square of the difference, so it was now about 36 times more difficult. In fact the 6” on the Paramount ME hardly needed guiding corrections, as my errors were hidden by its low resolution. I could take 5 minute images without guiding. Not so with the new scope! I was going to need some time to master this. Also, while only the centers of the images I was taking were usable due to the focuser sag issue, processing these higher resolution images was also a whole new ballgame. So I had plenty to work on while I solved the issue of how to focus and hold the camera rigid at the same time. I had an oddball idea in mind but needed time to figure out if it would work and then implement it.

In the meantime I’d started to email out some of the “better” 6” f/4 images taken with the ST-7, and I had a lot that hadn’t even been processed. So while I worked out issues with the 14” I continued sending those out. I find them very embarrassing today, but at the time they seemed a lot better than they really were. Also, while I covered some of the issues I was fighting, many I never mentioned, so those who got those early emails will be hearing (and have heard) some things I never talked about before, at least in any detail.

Next month I’ll cover the oddball way I ended up solving the guiding, camera rigidity and focus issues.  Yes it was a major black hole for my bank account but has worked out very well even though as best as I can determine I was the only person in the world using this solution routinely at the time.  It is starting to catch on with top mounts and software today.

Above: 14” system as first configured with the inadequate Meade focuser.  


Guiding and Focus Issues Resolved

I needed more rigidity, a better means of focusing, and a way to guide through filters. While I could guide well through the luminance filter, guide stars were few and far between in blue light and invisible through the H alpha filter. The software with the Paramount ME included Tpoint and an option to invoke “ProTrack”. Tpoint was adapted from software used to control the Gemini 8 meter telescopes. By mapping points in the alt-azimuth sky, the software learns where the scope really points when it thinks it is pointing to a specific alt-azimuth coordinate. While I enter right ascension and declination coordinates, those are converted to alt-azimuth positions for pointing the scope, even with it polar aligned. When ProTrack is enabled, the computer can then use this map to accurately point the telescope as it tracks across the sky. It knows, based on the map, how gear errors, various gravity induced sagging, and atmospheric refraction alter the pointing of the scope, and can compensate for them. How well it does this depends on how accurate the map is and how close together the mapped points are. The mount had virtually no periodic error right out of the box, only about +/- 1.3” of arc. Typical imaging mounts have this much error after periodic error correction is applied; the Paramount had it before. After periodic error correction it was reduced to where seeing induced errors covered up any remaining error. Combining these two capabilities gave me the idea that if the camera could be mounted solidly enough, then guiding wouldn’t even be necessary.

Before laying out big bucks I borrowed a 3” adapter that screwed onto the scope and into the camera, providing rigid mounting without any sag or flex issues. Then I tried imaging without guiding after making a very dense Tpoint map of the small area of the sky I was using for testing this concept. It was only about 2 fields tall and three hours long but included 60 mapped points. I then took a series of 10 minute images for three hours along this band and stacked them without alignment. The result was that each star was repeated 4 times with virtually no visible trail between the copies. Total error was about 6” of arc. Why was it jumping like that? I had taken the images with the Meade provided “mirror lock” on. The motion was in a slight arc indicating gravity was to blame, as each movement was toward the center of the earth. I tried again with mirror lock off. Now the trail was continuous with only minor hitches in the smooth arc. This told me the mirror was only sort of locked. Under gravity it held, then would suddenly release and hold again, with the cycle repeating at irregular intervals. My nice idea wasn’t going to work with a moving mirror scope. Besides, if I was to use RoboFocus to control the SCT focus, the mirror had to be unlocked. I’d need a guider.

But another imager, who used to have an older Meade SCT from before mirror lock existed, said he could lock the mirror rigidly in place as he had done with his. Fine, but how do you focus? You use a far better focuser than Meade’s to control focus. He used a focuser by a one man company in Colorado called Van Slyke Engineering. It was rock solid and, judging by the price, made of pure titanium reinforced gold. It was 3.1” so no vignetting, and RoboFocus compatible. A sampling of other focusers showed they were often more expensive and had far less focus travel. With the mirror locked, temperature changes move focus a long way, so the range would be needed.

One issue I had with the old Meade setup was that when I wanted to use the scope visually, removing the camera always resulted in the dust motes changing on the various filters. That meant I had to take new flats every time I used the scope visually: a major pain and time waster. Van Slyke offered a solution for that. He made a multi-port device that allowed the image to go straight through to the camera, or a diagonal could be inserted to send the image to one of two side ports. A third port could be used by a guide camera to guide ahead of the filters. This solved two problems: visual observing without the need for new flats, and a way to guide without loss from the filters. For that, though, I needed another camera. Also, the price of the focuser and the multi-port device together was over half the cost of the 14” scope! No one else made such a device.

I was thinking of photometric work, though my camera, being a non-blooming one, wasn’t linear over its entire range. In fact, tests showed it was linear for only about the first 40% of its range. This would be a problem for good photometric work. The ST-7 I’d borrowed was a very linear camera, perfect for photometric work. It can also function as a guide camera. I asked its owner what he wanted for it and got the first reasonable price I’d heard, so I snapped it up. I could only afford the focuser at the website’s prices. I called Mr. Van Slyke to order the focuser and the attachments to mate it to the camera and scope. He said most buyers also got the multi-port unit. I told him the two together were way beyond my budget. It turned out he haggles, and I eventually got both for only about 10% more than the focuser alone. Still, the black hole was getting larger by the minute and I still had to order the RoboFocus unit to run it. Soon I had the gear attached to the scope and ready to test. Amazingly it all installed and interfaced with my other software without a hitch. The focuser was very precise and could hit a focus position every time within 4 microns. That was its default step size and worked well with my system, so I didn’t change it. But the dang mirror would move forward and back as well as side to side, making all this precision moot.

A call went out to the imager who knew how to fix the mirror. He drove down (he lives in Canada) and soon had the back of my scope in pieces. He found the mirror lock was either not assembled right or had jarred apart in shipment. He said even assembled correctly it would move, so we went ahead and fixed the mirror’s position and reassembled everything. That meant collimation was all wrong and had to be corrected that night. I was done, right? Nope. Finding the correct locked position for winter and summer was a hit and miss affair. His guess was close, but I found at –25°C or colder the focuser wouldn’t go out far enough, and in the heat of summer it just barely went in far enough. Van Slyke made a quarter inch extension which was just right to allow it to work in winter. It too seemed to be made of solid gold, judging from the price tag. He didn’t haggle on it either. I have to remove it for summer. While the focuser is made to run from the computer, it came with a hand controller that plugs in in place of the computer for visual use. The multi-port unit was designed to be nearly parfocal with individual cameras, so the focus range also accommodates the visual range of my eyepieces. One eyepiece has to be inserted slightly less than all the way. It is one I rarely use, so not an issue. Again, he makes an extension that would solve the problem, but I’d bought enough “gold” for now. By the way, the Colorado fires last summer burned him to the ground and he is out of business with his shop a total loss. He’s my age so now retired. So if I need any more parts I’ll have to find a good machine shop.

With the fixed mirror it was time to try out the Tpoint-ProTrack idea again. This time the star size in a one hour stack was only a half second of arc larger than that of an individual 10 minute image. The 10 minute unguided image was smaller than one I guided for 10 minutes. Limiting the stack to 30 minutes, it too was smaller than an individual 10 minute guided image. So I spent two very long nights making a Tpoint map for the part of the sky I normally image in and tested various areas. In no test was the star size of a 30 minute non-aligned stack larger than that of a 10 minute image taken right after it. SUCCESS. Later I found software to automatically make the map faster and more accurately than I could by hand. With that I now often have no need to align images taken over up to two hours, unless the temperature changed, changing my image scale. The fixed mirror can be returned to factory condition at any time if I want. As to what was done, I watched but didn’t always follow, so I can’t explain it very well and won’t try, as I’ll likely get something wrong; this was done 8 years ago now.

Temperature created an issue I was warned about. Since the mirror is fixed to the scope, its separation from the secondary changes as the tube expands and contracts. With a moving mirror scope, refocusing keeps the separation constant. With my system that distance changes. That distance sets the amplification of the secondary, so as temperature drops my image scale increases. The change can be as much as 5 or 6 pixels. Standard stacking software will correct for star position and rotation but won’t correct for a change in image scale. More money to feed the black hole. At the time there was only one piece of software that would do this (now there are others to choose from, and they do far more than just adjust image scale while aligning, but at the time RegiStar was the only option). “Only” $135 (more today). Our galaxy’s massive black hole wasn’t looking all that massive any more.

I now had a system that could do everything I’d wanted when I started the project. Unfortunately I wasn’t close to being able to do it yet, but the gear could in the right hands. My software for processing images was weak and my ability to use it even weaker. I was still only doing mono imaging. I knew there was much more in the mono images than I was getting out of them, so I still had a lot of learning to do. Feeding the black hole was looking like a never ending affair. I was taking color data but didn’t have software for processing it very well, so I was concentrating on getting the most out of the mono images for now. I’d tried putting a couple of color images together but ended up with colors that were odd. I called M74 the “Dirty Motor Oil Galaxy” as that’s the color of its dust lanes I was coming up with. Everything also seemed to have a green tint. Removing it created more issues than it solved. I was totally lost and drowning in data I didn’t know how to process effectively. Others with the right software and the skill to use it could get a lot more out of my data than I was. I needed both better software and the skills to use it. This after nearly a year of operation. It seemed the fog of ignorance was just getting thicker the more I learned.

Next time I’ll cover how I’ve reduced but never eliminated that fog.  Of course that also meant keeping the black hole well fed.

Above: System with the good focuser, the main imaging camera and the ST-7 mounted in the guide port. An eyepiece is in one of the visual ports. The eyepiece must be replaced with an opaque cap for imaging, as even under my dark skies light gradients can enter through the eyepiece. A fact I learned the hard way.

Above: Mono image of NGC 4565 taken with the 14”, the same field as the 6” image last month. I still had a long climb up the learning curve of both taking and processing data ahead of me. Still, at the time, it looked pretty good to me. I might have given up if I had known how bad it really was.

Acquiring and Learning Adequate Software

While I now had most of the hardware I needed, my software was sorely lacking and my ability to use it was also very weak. The mount came with The Sky Pro 6 and Tpoint as well as CCDSoft. The Sky was designed specifically for the Paramount by Software Bisque, who made the mount. It works with other mounts as well, but some features are only available with the Paramount. Tpoint was adapted from software developed for the two Gemini 8 meter scopes and can be used two ways. In one use it works much like the simpler software for most go-to mounts that uses a few star pointings to refine pointing accuracy, though it is far more capable and more complicated to use. For this purpose it needs 6 star pointings, three on either side of the meridian for a GEM mount. After that the mount will point with sufficient accuracy to put the object on even a very small CCD chip; that is, it will usually be within a couple of minutes of arc. But where it really shines is when you map more than 6 stars, a lot more. Given enough points (I use 800, though most consider that overkill) it learns precisely how the mount points and tracks in all parts of the sky that are mapped. This is done using alt-azimuth coordinates, so it is independent of the stars themselves. It just knows that when asked to point at, say, an azimuth of 87 degrees and altitude of 43 degrees, the mount really points to 87.12 degrees by 42.93 degrees (a made up example). In this way it learns how gear errors; mount, scope and optic flex under gravity; atmospheric refraction; and other effects cause the mount to point differently than it would under ideal conditions with perfect gears, no sag and no atmosphere to move an object higher in the sky than it really is.
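The idea behind such a pointing model can be sketched in a few lines: fit the difference between commanded and actual positions with a small set of geometric terms, then use the fit to correct future slews. This is a toy illustration only; the two basis terms and all the data below are made up for the example and are not Tpoint's actual model.

```python
# Toy pointing model: least-squares fit of pointing offsets as a
# function of alt-az, in the spirit of Tpoint.  Terms and "mount
# errors" are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 60)   # 60 mapped points, like a small run
alt = rng.uniform(0.3, 1.4, 60)      # radians above the horizon

# Pretend the mount has an azimuth-axis tilt plus a gravity-sag term;
# offsets are in arcseconds, with 1" of "seeing" noise on each point.
true_offset = 30 * np.sin(az) * np.tan(alt) + 12 * np.cos(alt)
measured = true_offset + rng.normal(0, 1.0, az.size)

# Fit the same two basis terms by least squares
A = np.column_stack([np.sin(az) * np.tan(alt), np.cos(alt)])
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)

residual = measured - A @ coeffs
print("fitted coefficients:", coeffs.round(1))
print(f"rms pointing residual after model: {residual.std():.2f} arcsec")
```

The fit recovers the invented tilt and sag coefficients and leaves only the seeing-level noise as residual, which is why a dense enough map lets ProTrack track well enough to skip guiding.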

It took me months to learn how to make a map quickly. Current versions will now do this automatically with just a couple mouse clicks and it takes it from there. I had to feed the black hole (very low calorie diet fortunately) for an add-on that did this. A free version I tried didn’t work well but the pay one did.

Also, The Sky was not intuitive (it still isn't) and took me a long time to get used to. I still have to stop and think how to do some things, and there is one minor thing I used to know how to do that I can't seem to figure out how to do easily again. Still, once mastered, it is very powerful, allowing me to track fast moving asteroids and take my images without guiding, a bane of most imagers. The freedom from finding a usable guide star is wonderful!

But this is more a hardware control issue. The real problems I had were with getting good data and then turning it into a good image. There my software stunk. CCDSoft came with the mount and camera. It was a joint project (now defunct) of the camera maker and Software Bisque, makers of the Paramount. It had one major competitor at the time, Maxim D/L. Both were many hundreds of dollars, but when you get one "free" in the price of the camera and mount (three copies no less), and a book I bought showing how to use both seemed to indicate they were equals, you go with the free copy. They aren't equals! Not even close. CCDSoft is fine for image acquisition. I still use it for that in fact (with a dither plug-in), but once the data is acquired and calibrated (dark subtracted and flat fielded) the program is mostly very poor. But I didn't know this and used it for image stacking (the free Deep Sky Stacker, which didn't exist then, is far superior in this regard, and some pay programs that did exist are even better). This left a lot of noise in my images that could have been removed (see the image of NGC 4565 last month for a noisy example). Noise does more than just add dark and light pixels to the image; it reduces sharpness and hides low contrast details. While it can't be avoided completely, you don't want to add any needlessly, and the very basic stacking modes of CCDSoft are poor in this regard.

Once aligned (I've already mentioned I had to use the expensive RegiStar for this, as it was all that existed at the time that could handle a changing image scale besides misalignment) I was using good 8 bit image processing software to further process the image. What I wasn't thinking about is that the camera generates an image that has 65536 different levels of intensity for each color while 8 bit software uses only 256. Every 256 camera levels get collapsed into a single 8 bit level! I was using only 0.4% of the data I collected. Now that's dumb. At one time that was all that was available, but even back in 2006 there was 16 bit Photoshop that didn't discard all that data. Once I saw what another imager, who used good stacking software as well as Photoshop CS (the best version at that time), did with my data, I realized I had to feed that darned black hole yet again. At the time CCDStack was about $175 and Photoshop over $600. Having poor internet available here, I had to pay $200 for CCDStack to get the CD version rather than just download it. That also meant another $100 for good tutorial CDs to learn how to use it, as online tutorials were just too herky jerky with my connection to be usable. Then there was Photoshop…
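The arithmetic behind that 0.4% figure is simple enough to check directly:

```python
# Why processing 16-bit camera data in 8-bit software throws data away:
# 65536 levels collapse into 256, i.e. 256 camera levels per 8-bit level.

camera_levels = 2 ** 16    # 65536 levels from the 16-bit CCD
eight_bit_levels = 2 ** 8  # 256 levels in 8-bit software

levels_merged = camera_levels // eight_bit_levels
fraction_kept = eight_bit_levels / camera_levels

print(f"{levels_merged} camera levels collapse into each 8-bit level")
print(f"only {fraction_kept:.1%} of the distinct levels survive")
# → 256 camera levels per 8-bit level; 0.4% of the levels survive
```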

Here I got lucky. A science teacher I've known for 30 years at a school in Walker had helped me set up the scope, as some of the original tasks took three people (lifting the scope onto the mount, for one example). The school had a 36 license package for Photoshop but was only using 31 of them. It had no need for the other 5, but the package was cheaper than getting just the 31 they needed. I was allowed to use one in exchange for the school using my images in their science classes. Also, since Photoshop is complex, they had students who knew all of its hidden features. It turned out they knew these for ordinary photos and were nearly as lost as I was when it came to processing deep sky images. Still, they understood how to create a color image out of black and white filtered images and other things I needed to learn. So we both learned together. I also could use their computer and internet to watch some free online tutorials on astro image processing with Photoshop. From these I quickly learned I'd need several plug-ins for Photoshop. These fortunately are either low cost or free. So the black hole wasn't being as well fed as it had been, at least.

Going from mono to color added additional complications. The atmosphere scatters blue light. The lower the scope is pointed, the more blue is lost to scattering (think red sunsets to see the issue). A method of color correction is needed to put that blue back into the image. Also, color filters are not designed for a particular sensor. Each sensor has a different response curve: some are red sensitive, mine is blue sensitive. Filters are generic. Some manufacturers (Astrodon being the main one) do make filters intended for either red sensitive or blue sensitive cameras, but again this is only a general trait. None are a close enough match. So you have to learn to compensate for both filter and atmospheric effects. I first learned what is known as G2V processing, where you take a white star at the same altitude as your image and use it as a white reference. Now I often use a free program called eXcalibrator that compares the color frames you've taken to Sloan survey photometric data and gives you the needed white balance figures. Unfortunately Sloan doesn't cover the entire sky, and its fallback survey, NOMAD, isn't all that good. I usually fall back to G2V for those areas. Even this, however, was leaving a green cast to my images (in making the adjustment, green is not adjusted, only red and blue are). I fought this for years until a free plug-in for Photoshop (based on a PixInsight routine) called Hasta La Vista Green was made available. This eliminates the green glow of airglow and light pollution that tainted even the G2V and Sloan data. Not everything needed feeds that black hole! Other plug-ins and helper programs are now in my list of "required" software.
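The core of G2V-style balancing can be sketched in a few lines: measure a known solar-type (G2V) star through each filter, then scale the channels so that star comes out white. The flux numbers below are made up for illustration; as the text notes, green is left alone and only red and blue are scaled.

```python
# Minimal sketch of G2V white balancing.  Fluxes are invented
# example values, not real measurements.

# Background-subtracted star fluxes for a G2V star through R, G, B filters
g2v_flux = {"R": 14200.0, "G": 11800.0, "B": 9100.0}

# Scale every channel relative to green so R = G = B for this star
weights = {c: g2v_flux["G"] / f for c, f in g2v_flux.items()}

for color, w in weights.items():
    print(f"{color} channel weight: {w:.3f}")

# Applying these weights to the stacked R, G and B frames gives a
# first-order white balance; atmospheric extinction at the imaging
# altitude still has to be handled, which is why the reference star
# should sit at the same altitude as the target.
```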

This teaming with the school had another unexpected consequence. That school told others, who told others, etc. Now I send the images out to hundreds of schools, most of which are NOT in the US. Most US schools would use them, but the idiot "Teach the Exam" system we have now gives them little time for it. Even now at Walker it is an after school event, and that is going away as the teacher retired as of June 30 and isn't being replaced. Since my updates now go mostly to foreign destinations, but for a hundred or so amateurs (again many not in the US), I've shifted to using only metric units in the updates, which surprisingly riled some of the few US schools on the list. I'd think science teachers would appreciate that. Apparently not all did. I've not lost many, but they do grump, and many ask for US units and grump more when I make them do that conversion work. Many in the club now get the updates either direct or via relay by Dave Churilla. That version includes comments after the pictures, not sent to the schools, about life here in Minnesota, which some love and some don't. It goes out usually every other day, sometimes more or less often. Keeping to a regular schedule has become more of a chore than I expected, but teachers want it, and since they are in both northern and southern hemispheres (more in the southern, interestingly enough) it is an all year project.

It also attracts the “fringe” out there, as the updates get forwarded many times, eventually reaching some rather mentally challenged folks. I could write a book of all the wacko junk they think is real science; you don’t know whether to laugh or cry reading it. I used to think I heard from the crazy folks at Hyde Observatory, like the one who insisted that the computer glitch that temporarily took out Spirit shortly after it landed was really the CIA destroying the rover to hide their base on Mars. Those can’t compete with the stuff I now get thanks to my Observatory Updates being forwarded over and over again until they reach a “crazy.” Every NEO (Near Earth Object) is going to destroy us, not by hitting us, but with a magnetic field that will rip all the iron out of the earth, or with electric bolts that will electrocute everyone. You don’t want to know how many tell me, “Yeah, I was wrong about it last time, but this time it has been confirmed by NASA.” Every fall, with the sun getting lower in the sky, I get this as “proof” that the earth’s axis is tilting over and will soon flip. But in the spring, when the sun is rising higher each day, they say nothing. Apparently a sun moving lower is bad but one moving higher is good. But I digress.

I thought the observatory was now complete, though it was obvious I’d need more time to get the most out of my image processing software. Still, it was my lack of skill with it, rather than any lack of capability in the software, that was now the problem. That would come in time. At least the hardware was all in place.

OK, by now you have likely guessed I was wrong again. It turned out I needed to feed that super-massive black hole in my bank account yet a few more times, and one of those feedings wouldn’t be a small purchase. I’ll cover that in the last installment next month, along with what I might have done differently if today’s equipment and software had been available in 2005.

Above: NGC 4565 a third time. This time using proper software to acquire, calibrate, stack and process the image, using the color data I’d collected but hadn’t known how to use effectively. Still, this one is a couple years old and I’d likely get more out of it if I redid it today. Note that the ends of the galaxy are warped. The full size image, too big for the newsletter, is at:

The Black Hole Keeps on Growing

I thought I now had my system complete. I could image without any of the issues I’d fought since the 1950s. I just flipped a switch turning everything on, fired up the computer and started imaging. It all worked as I had hoped when I started down this road. I could even script a session, go out for dinner and a movie, come home and go to bed while the system worked through the entire winter’s night taking the data I had scripted for it. I could get up the next day to hundreds of megabytes of good data. What more could I possibly need? It turned out quite a bit more feeding of that super-massive black hole in the bank account was needed.

In the photo of the original setup a couple issues back you can see there’s a dew heater around the corrector plate but no dew shield. I’d been told by several users of SCT scopes that a dew shield wouldn’t help much, but that a good dew heater would do the job, no dew shield needed. They didn’t live on a lake! I soon was seeing my data fade as the night progressed.

It stayed clear, but it was as if a neutral density filter had been added. Looking at the corrector showed it heavily dewed over even with the dew heater at maximum. Adding a dew shield over the dew heater solved the problem; the heater now runs at 25% and the corrector stays dew free.

Fortunately this was a minor expense compared to what was to come.

With the addition of the dew shield my data suddenly improved in brightness; I had been working with a fogged corrector for some time without knowing it! Another piece of hardware I soon added was a computer-controlled outlet box for AC power. I had a switch in the house that turned on the outlets at the pier. With the cameras, focuser, mount and dew heater plugged into these I could turn them on and off from the house, but only all together. Some nights had low humidity, so I didn’t need the dew heater. Usually I was only using one of the two cameras. Some cloudy nights I took my darks and didn’t need anything but one camera turned on. Why run gear that isn’t needed? The cheapest solution I found was offered by RoboFocus, maker of the controller for my focuser: a 4-outlet computer-controlled box. With the RoboFocus plugged into the switched pier outlets, and the two cameras, mount and dew heater plugged into the 4-outlet box, I can now turn on only what I need from the house. When taking darks I really don’t need the focuser powered, but otherwise it works well and cost half what a dedicated outlet system would have. No new software was needed either. This too was a rather minor expense.

I soon found I needed one more piece of rather expensive hardware. With the system I could script a night’s activity and go to bed, but many nights I found I wasn’t sleeping for worrying about rain. What if it started to rain and there wasn’t thunder to wake me? The solution was a cloud sensor. Again, at the time there was only one available, the Boltwood.

That created a serious feeding of the black hole at $1,400 (today it is $1,800!). It will sense the first hint of rain, or even just clouds, then park the telescope, shut the roof and turn everything off. Today there are considerably cheaper alternatives in the $350 to $500 range. From reports they are just as effective, but they don’t work with as many observatory control programs.
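The logic the sensor drives is simple, even if the hardware isn’t cheap. A minimal sketch, with hypothetical function and threshold names (the real Boltwood feeds its readings to the observatory-control software, which performs these steps for you):

```python
def safety_actions(raining, cloud_fraction, cloud_limit=0.5):
    """Return the ordered shutdown steps for the current sky reading.
    `raining` is a boolean wetness flag; `cloud_fraction` is 0.0 (clear)
    to 1.0 (overcast). Thresholds here are illustrative only."""
    if raining or cloud_fraction > cloud_limit:
        # Order matters: stop the exposure, stow the scope, then close up.
        return ["abort_exposure", "park_telescope", "close_roof", "power_off_gear"]
    return []  # safe to keep imaging
```

The key design point is that the sensor is a dead-man switch: the decision to shut down is made automatically on the first bad reading, not by a sleeping human.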

I’m still learning to process my data. That seems to be a fog that never lifts. New processing tricks are being developed all the time, and I even come up with a few myself. The result is that everything I did a couple years ago could be greatly improved by reprocessing. But with over 800 images more than two years old, that’s a lot of reprocessing that will likely never happen. Besides, I’d just need to redo all that and more in another two years. It would be a never-ending nightmare.

Would I do things differently today? Of course I would. There are new and better scopes and cameras available now. Even mounts are better, some being direct drive with precision sensors that completely eliminate periodic error and need only a couple dozen pointings to track without guiding (but you will really feed that black hole to get this convenience). Rather than the rag-tag software system I now use, PixInsight claims to do it all in one package, and many top imagers are moving to it for most things, though nearly all still use Photoshop as well for the final touches. It has a horrid learning curve, but those who master it say it is worth the hassle, and they are putting out images to prove it. Some of the Photoshop plug-ins I use are based on processes in PixInsight. It was developed from the ground up for deep-sky imaging and is far cheaper than Photoshop alone, let alone all the other software I use, though it doesn’t do image acquisition. A new, inexpensive program called SG Pro is now gaining traction for that.

While it works with SBIG cameras’ internal guiders or an external guide head, it uses the free PHD guiding program, which doesn’t automatically adjust its calibration for declination; unlike MaxIm DL, it requires recalibration for every image.
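Why declination matters: a star’s apparent east-west drift per unit of right ascension shrinks by the cosine of its declination, so a guide calibration made near the celestial equator over-corrects at high declination unless it is rescaled. Programs that “adjust for declination” apply that scaling automatically. A minimal sketch of the arithmetic (function and parameter names are my own, not from either program):

```python
import math

def ra_guide_rate(rate_at_cal, cal_dec_deg, target_dec_deg):
    """Rescale an RA guide calibration (e.g. pixels of drift per second
    of correction) from the declination where it was measured to the
    declination of the current target."""
    return (rate_at_cal
            * math.cos(math.radians(target_dec_deg))
            / math.cos(math.radians(cal_dec_deg)))

# Calibrated on the celestial equator, now guiding at declination +60:
# the apparent RA motion is halved, so the effective rate is halved too.
rate = ra_guide_rate(10.0, cal_dec_deg=0.0, target_dec_deg=60.0)  # ≈ 5.0
```

Without this correction (or a fresh calibration) the guider pushes twice as hard in RA as it should at +60 degrees, which is why the old PHD wanted a recalibration at every new target.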

CCDSoft is a dead program, and its replacement requires a new version of TheSky with features I just don’t need to buy. Or, if I felt really rich, I could get MaxIm and ACP (nearly $3,000 for both!). I don’t feel that rich. For now I’ll stick with CCDSoft, as it still works fine for image acquisition and my script files.

One thing has changed since I mentioned FocusMax as a free solution to auto-focusing images if you have a good-quality computer-controlled focuser (that part isn’t free, unfortunately). While the free version of FocusMax is still available at their site, a new version said to be much more user-friendly is now being sold by CCDWare for $150. The imaging community says this is too high but is buying it anyway. The free version is nasty to get working, as it requires ASCOM drivers and other support software (also free), and getting it all to talk to each other can be “interesting”. I’m told the pay version is plug-and-play simple.

So now you have a choice. Since I have my old version running fine (not without a lot of effort), I’ll not feed the black hole for the new one.

Of course, you can do things a lot cheaper than I have. By working wide angle with small APO refractors and field flatteners you can get by with far cheaper mounts, cameras with smaller chips, and somewhat less expensive scopes, though the savings there are smaller. By not running remotely you can avoid automatic focusing issues and use a simple Bahtinov mask for manual focusing. Just remember to remove it for the image; it seems many are forgetting that, to their dismay. There are inexpensive image acquisition programs like Nebulosity, and the free PHD mentioned above for guiding. A simple guider can be made from, say, the inexpensive Meade DSI II camera found all over the used market at very low cost. Free software is available to guide using an old webcam as well.

The free Deep Sky Stacker can do a good (not great, but darned good) job of stacking low-noise images. One-shot color cameras, while inefficient and ill suited to some things, can save a lot of money by eliminating the filter wheel and filters.

My goal was to image no matter how cold it was or how thick the skeeters, and to do so while I went out to dinner and a movie, or at 3 a.m. while sleeping. That has been achieved. I can run all night through our long winter nights without losing any sleep. I figured that worth the cost, though I’ll admit I didn’t realize how well fed that black hole was to become.

Still, it was worth all the effort and cost. Now I just let the computer do most of the work unless I need to force a particular object. It decides if it is clear enough to open, picks which objects on my to-do list are best positioned at the time, takes the data, and shuts down before the weather goes bad on me. All I do is keep the to-do list fed with objects; it now sits at about 750 of them, so I’m keeping up my side of the deal. Now if I could just automate the image processing side to a similar extent, I’d have time again for fishing or the black powder range. In any case it is super nice to just flick a switch to turn on power to the observatory and in a couple of minutes be acquiring data. While glitches can occur, they are now few and far between. I also have the capability to add a wide-field system on top of the current one.

That keeps getting put off by all the items I keep adding to the high resolution to-do list.

This brings me to the end of my journey so far. I’m sure the black hole will continue to be fed, just not on as high-calorie a diet as it is used to.

I’ll be learning new processing tricks as well. But for now I have a system that, at least on nights of good seeing, can easily beat the images I lusted over in the 1950s and 1960s from the 200-inch Palomar telescope. The system goes deeper, and with more dynamic range, than the film days of the major observatories, and does so in about the same amount of time. This allowed me to show that a rather famous “jet” in a 1965 image of Arp 192 wasn’t a real feature of the galaxy. My images have now been used in nearly a dozen master’s and PhD theses by students on many continents, but not this one. One showed a previously unknown outburst of a flare star in a galaxy about 35 million light-years away that threw a monkey wrench into a student’s thesis. Another showed why the rotation curve of a galaxy was so messed up: it was really two galaxies, one directly in front of the other, such that Palomar images failed to show there were two. (It turns out a pro beat me to that one, with an article in a rather obscure journal I didn’t check.) I’ve made a movie showing gasses flowing down the tail of a comet, and found a dozen or so previously unknown asteroids, most still unknown since I didn’t find them until it was too late to recover them and define an orbit. The list goes on.

I never expected any of that to happen. I just wanted to improve on my crude film-era imaging. That I could actually make a contribution here and there seemed impossible. But even more advanced amateurs now work with the pros, since they have the image processing skills to pull out features the pros can’t; the pros just don’t have the time needed to learn how. One of my images resulted in the discovery of a new planetary nebula. Unfortunately, I wasn’t the one who caught it.

Another amateur on my update list did, taking a verifying image from central Berlin! It is now officially Le1, since he reported it and I didn’t. If I’d been on my toes it would have been Jo1. Close, but no cigar!

The effort and cost have been worth it. Well, the effort has; I’m not so sure about that ever-growing black hole in the bank account. The results have far exceeded my expectations. I continue to surprise myself, such as when I discovered I’d picked up a dozen or so planetary nebulae in M31. I managed to image a few globular star clusters in that galaxy in the 1960s, but never expected to pick up something as small and faint as a planetary nebula. I was also shocked to find a quasar at more than 12 billion light-years look-back time (z greater than 4); now that, I find, is quite common in my images. To think that at one time 100 million light-years was beyond my reach. If someone had told me I’d have a system that could see the gravitational arcs made famous by Hubble Space Telescope images, I’d have thought them nuts, but that is quite within my range given sufficiently good seeing.

Thanks to the mount’s ability to track fast-moving asteroids, I recently picked up a rock just 22 meters across passing us at a distance of over 600,000 kilometers (about one and a half times the moon’s distance). Today’s digital equipment has greatly leveled the playing field between amateur and professional; they just do in seconds what takes me hours. But then my system cost many hundreds of millions of dollars less than theirs. The Sloan survey scope (2.5 meters) can easily reach 24th magnitude in 27 seconds; I need a couple of hours on a good night to do the same. I used to impress myself reaching 16th magnitude on film; now I go 1600 times fainter in the same time, and do it literally in my sleep. Today’s technology is amazing, but far from cheap.
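The “1600 times fainter” figure is just the standard magnitude arithmetic: each 5 magnitudes is a factor of 100 in brightness, so a ratio R corresponds to 2.5·log10(R) magnitudes. A quick check of the numbers in the paragraph above (function names are my own):

```python
import math

def brightness_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

def magnitude_difference(ratio):
    """Magnitude difference corresponding to a brightness ratio."""
    return 2.5 * math.log10(ratio)

# 1600 times fainter than magnitude 16:
# 2.5 * log10(1600) ≈ 8.0 magnitudes, i.e. about magnitude 24,
# which matches the Sloan 24th-magnitude comparison.
gain = magnitude_difference(1600)
```

The same formula says the 2.5-meter Sloan scope’s 27 seconds versus my couple of hours is a trade of aperture and efficiency for time, not a difference in reachable depth.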

Makes me wonder where it will be in another 50 years.

One warning: this series may make it seem that if you buy the right software and hardware you can simply install everything and be imaging while you sleep. It doesn’t work that way. It took me several years to work out all the issues that arose along the way. You need to be able to image accurately every time manually before you will know the many dozens of values the automation software needs entered to make it work. The only way to avoid this is to use one of the many rent-a-scope systems now available; they have, you hope, set up the systems they rent out so you don’t have to do that work. You will still face the steep learning curve of coaxing an image out of your data. I recommend starting there before you spend far bigger bucks, only to find out you don’t like the tedium of calibrating and processing images.


Copyright 2014, Rick Johnson, All Rights Reserved.