4G LTE Residential Broadband?

Sierra Wireless AirLink GX450
 
Though I didn’t properly appreciate it at the time, I was pretty fortunate to grow up in America’s rural midwest. A land of crops and livestock, open spaces, families, pickup trucks and freedom. And while low population density has many upsides, there are tradeoffs to every environment. One such tradeoff is the fact that broadband was much later in coming there than anywhere else I’ve ever been. And much less effective.

My folks back on the farm have been limited to around 1 Mb/s service at the best of times via DSL from Frontier Communications. A Frontier service representative told me last year on a support call that their DSL service had become oversubscribed in the area. Dad had come to expect fairly predictable service outages nearly every day. Frontier DSL in the area had become the Internet equivalent of living in a developing nation where electricity can’t be counted on around the clock.

While visiting my folks over Easter weekend, I happened to be updating one of my dad’s PCs to a newer release of Linux, necessitating that I also download around 1 GB of operating system updates. Rather than suffer through the download via his DSL, I used my AT&T iPhone as a Wi-Fi hotspot and downloaded the necessary files in minutes rather than hours. I was somewhat surprised by the speed of AT&T’s LTE cellular network, and assume that they have upgraded the local tower since my previous visit. In addition to being much faster than DSL, it was also faster than what I’d seen on Verizon in the area. Sprint is barely there, and T-Mobile is practically non-existent, though cellular networks are expanding all the time. That Easter visit got me to thinking.

Developing nations skip cabled telephone or Internet infrastructure and go straight to cellular networks. Perhaps for swaths of rural America, a similar approach makes sense at some point. But are we there today?

4G LTE Modem
I wanted to try out a 4G LTE modem in place of dad’s existing DSL modem. But which one? While AT&T sells a range of consumer-grade cellular Internet offerings, I wanted something with a high degree of flexibility and control. Something that could reasonably be expected to provide 24×7 always-on Internet service via AT&T’s LTE cellular network.

So I ended up ordering a Sierra Wireless AirLink GX450 from reseller USAT Corporation of Chapel Hill, North Carolina. The AirLink GX450 is offered in both AT&T and Verizon-friendly versions, and starts at $499 with an AC adapter, before one adds an antenna or any extra modules. I added a penta-band indoor rubber-duck style antenna for an additional $25. Outdoor and mobile vehicle antennas are available.

The AirLink GX450 is an industrial unit in a steel case, designed for mobile and machine-to-machine applications. This unit would look at home in a police car, a delivery vehicle, or inside an ATM or kiosk. The configuration options are extensive, with around ten pages of settings. Expect to find all of the firewall and VPN options you’d need to build out a secure data network using these units.

Activation
I stopped in at an AT&T store near my office in southern Connecticut to get a SIM card and to activate the AirLink GX450. This particular device takes a Mini-SIM (2FF), which, contrary to its name, looks absolutely huge in the era of Micro and Nano SIMs. A Phillips screwdriver is required, as one has to remove the top portion of the GX450’s enclosure to get to the SIM card socket. The AT&T saleswoman was very helpful and conversational, and got my transaction done in about ten minutes. There was no activation fee. The unit will cost an additional $20 a month on my AT&T Mobile Share Value Plan. It’ll pull from the same data bucket as any other device on my AT&T account, including my iPhone, iPad and a 2015 Chevy Silverado. We’ll talk more about the per-gigabyte cost later.

Configuration
The Sierra Wireless AirLink GX450 comes with a configuration sheet identifying how to set it up via a web browser. If you’re an IT guy or gal, or have set up home Wi-Fi routers, the interface is intuitive. I made the following changes from the defaults.

  • Enabled Keep Alive by setting the device to ping 8.8.8.8 on a five-minute interval when there is otherwise no activity.
  • Set an alternate primary and secondary DNS server, as Sierra’s default DNS provider is way slower than it should be.
  • Set Inbound Port Filtering Mode to only accept inbound traffic on specified ports, and then didn’t specify any. Consider this activating a firewall.
  • Disabled the AirVantage Management Service, as remote administration is not needed.
  • Disabled GPS service.
  • Changed the default password to a randomly-generated one.

Installation
Following preliminary testing at my residence, I boxed up the GX450 and FedEx’ed it to my parents. A day later, I called up and spoke with my father, now in his mid 70s. It took us maybe 15 minutes to talk through the process of shutting down and removing his Frontier DSL modem, and putting the AirLink GX450 in its place. Following some device reboots in the proper order, he now had Internet access via AT&T Wireless.
 
Another Speedtest via AT&T Wireless
 
Speed
Dad’s first speed test came in at 8.20 Mb/s down and 4.74 Mb/s up. A later test would show 13.45 Mb/s down and 11.78 Mb/s up. Sure, for those of us living in more densely populated areas, these speeds aren’t exactly impressive. For instance, the download speed doesn’t meet the Federal Communications Commission’s current definition of broadband: 25 Mb/s down and 3 Mb/s up. But this bandwidth is 10 times as fast as dad’s typical recent experiences with Frontier DSL at their address. More important, it’s fast enough to get things done. And fast enough for remote knowledge workers too.

Reliability
Having initially determined the speed to be satisfactory, the next question would be reliability. Would this AirLink GX450 hold its connection to AT&T and give dad uninterrupted service that he can count on day to day?

During the first two weeks, the AirLink GX450 and AT&T delivered 24×7 residential broadband at his address with bandwidth and reliability that he hadn’t experienced previously. The only noticeable slowdown came around day 7, during Saturday evening primetime hours, with symptoms that suggested possible saturation of AT&T’s uplink to the Internet.

Cost
Whether any experiment is successful or not, it’s often worth doing. And if this experiment were to prove a failure at this point, it might only be in the area of cost. Dad used 6.9 GB of data in his first full week on AT&T 4G LTE. I hoped that the week was an anomaly, as I found myself flying out for another visit at the end of the week to perform some data-intensive maintenance on a second PC at the house. But the first week wasn’t an anomaly.

Were I to continue this experiment indefinitely, I’d have to up my AT&T Mobile Share Plan to 30 GB of data at $225/month, plus the $20/month access fee for the AirLink GX450, and the various charges for my aforementioned iPhone, iPad and Chevy Silverado. That kind of spending is viewed as luxury self-indulgence by anyone in my family, something that we should be embarrassed to even mention. To put it another way, AT&T’s cost over the incumbent Frontier DSL scales linearly with the 10x boost in performance. At the same time, price is always relative, and there are those in this world who could demonstrate a decent return on investment with this improved connectivity. Ultimately each of us has to decide for ourselves.

At the very least, if choosing 4G LTE as residential broadband, one would have to follow my sister’s advice when she first heard of this plan. “Just don’t tell them about Netflix!”


Installing VMware Horizon View Client on Ubuntu 14.04.1 LTS 64-bit

VMware Horizon View Client
 
These days I tend to do nearly as much work from a ThinkPad running Ubuntu 14.04 64-bit Linux as I do from various Microsoft Windows PCs on my desk.  And for much of my work, Ubuntu Linux functions as well as anything.  But there are certainly cases where I either need or prefer access to a Windows desktop.  And at my company, a Windows session is as close as connecting to our VMware Horizon View environment.

Historically, installing the VMware View client on Ubuntu was a one-line affair.  But when I try the same approach on Ubuntu 14.04 64-bit, I receive the error, “Unable to locate package vmware-view-client”.  Never fear.  Simply by adding a couple of extra lines to enable a repository, I’m soon good to go with VMware’s official Horizon View Client.  If you find yourself needing the same functionality, simply run the following commands, one at a time, on your Ubuntu 14.04.x 64-bit desktop.
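
The commands below are a sketch of that process, assuming the client is still published in Canonical’s partner repository for Trusty (where it lived at the time; verify the repository line before relying on it):

    # enable Canonical's partner repository, which carries vmware-view-client
    sudo add-apt-repository "deb http://archive.canonical.com/ubuntu trusty partner"
    sudo apt-get update
    # install VMware's official Horizon View client
    sudo apt-get install vmware-view-client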

That’s it. Once done, you can search your machine for VMware and drag the icon to the Launcher for easy access in the future.


Red Hat High Availability (in Toronto)

CN Tower
 
Last week I had the opportunity to attend a session of the four-day class, Red Hat Enterprise Clustering and Storage Management (RH436), held in Toronto.  It was a busy week, and that lovely view of the CN Tower above, as seen from my hotel room window, had to suffice for experiencing the city.  Fortunately I’ve been here before, looked through the Glass Floor, and generally done the tourist thing.  So let’s get down to business.

Purpose Of The Trip
At the radiology practice in which I work, we’ve long relied on Red Hat Enterprise Linux as the operating system underpinning our PACS (Picture Archiving and Communication System), the heart of our medical imaging business.  For a while, most of the rest of our back-end systems ran atop the Microsoft family of server products, just as our workstations and laptops run Microsoft Windows Professional.  But over the last couple of years, the Microsoft-centric approach has gradually started to shift for us, as we build and deploy additional solutions on Linux.  (The reasons for this change have a lot to do with the low cost and abundance of various open-source infrastructure technologies as compared to their Microsoft licensed equivalents.)  But as we build out and begin to rely on additional applications running on Linux, we have to invest time in making these platforms as reliable and fault-tolerant as possible.

Fault Tolerance, Generally Speaking
The term ‘fault tolerance’ is fairly self-explanatory, though in practice it can cover a substantial amount of ground where technical implementations are concerned.  Perhaps it’s best thought of as eliminating single points of failure everywhere we can.  At my employer, and perhaps for the majority of businesses our size and larger, there’s already a great deal of fault tolerance underneath any new ‘server’ that we deploy today.  For starters, our SAN storage environment includes fault tolerant RAID disk groups, redundant storage processors, redundant Fibre Channel paths to the storage, redundant power supplies on redundant electrical circuits, etc.  Connected to the storage is a chassis containing multiple redundant physical blade servers, all running VMware’s virtualization software, including their High Availability (HA) and Distributed Resource Scheduler (DRS) features.  Finally, we create virtual Microsoft and Linux servers on top of all this infrastructure.  Those virtual servers get passed around from one physical host to another – seamlessly – as the workload demands, or in the event of a hardware component failure.  That’s a lot of redundancy.  But what if we want to take this a step further, and implement fault tolerance at the operating system or application level, in this case leveraging Red Hat Enterprise Linux?  That is where Red Hat clustering comes into play.

Caveat Emptor
Before we go any further, we should note that Red Hat lists the following prerequisites in their RH436 course textbook: “Students who are senior Linux systems administrators with at least five years of full-time Linux experience, preferably using Red Hat Enterprise Linux,” and “Students should enter the class with current RHCE credentials.”  Neither of those applies to me, so what you’re about to read is filtered through the lens of someone who is arguably not in the same league as the intended audience.  Then again, we’re all here to learn.

What Red Hat Clustering Is…
In Red Hat parlance, the term ‘clustering’ can refer to multiple scenarios, including simple load-balancing, high-performance computing clusters, and finally, high availability clusters.  Today we’ll focus on the latter, provided by Red Hat’s High Availability Add-On, an extra-cost module that starts at $399/year per 2-processor host.  With Red Hat’s HA Add-On, we’re able to cluster instances of Apache web server, a file system, an IP address, MySQL, an NFS client or server, an NFS/CIFS file system, OpenLDAP, Oracle 10g, PostgreSQL, Samba, a SAP database, Sybase, Tomcat or a virtual machine.  We’re also able to cluster any custom service that launches via an init script and returns status appropriately.  Generally speaking, a clustered resource will run in an active-passive configuration, with one node holding the resource until it fails, at which time another node will take over.

…And What Red Hat HA Clustering Is Not
Less than two weeks prior to the RH436 class, I somehow managed to get through a half-hour phone conversation with a Red Hat Engineer without touching on one fundamental requirement of HA that, when later identified, shaped my understanding of Red Hat clustering going forward.  So perhaps the following point merits particular attention: Any service clustered via Red Hat’s HA add-on that also uses storage – say Apache or MySQL – requires that the cluster nodes have shared access to block level storage.  Let’s read it again: Red Hat’s HA clustering requires that all nodes have shared access to block level storage; the type typically provided by an iSCSI or Fibre Channel SAN.  Red Hat HA passes control of this shared storage back and forth among nodes as needed, rather than having some built-in facility for replicating a cluster’s user-facing content from one node to another.  For this reason and others, we can’t simply create discrete Red Hat servers here and there and combine them into a cluster, with no awareness of, nor regard for, our underlying storage and network infrastructure.  Yet before anyone goes dismissing any potential use cases out of hand, remember that like much of life and technology, the full story is always just a bit more complicated.

Traditional Cluster
Let’s begin by talking about how we might implement a traditional Red Hat HA cluster.  The following steps are vastly oversimplified, as a lot of planning is required around many of these actions prior to execution.  We’re not going to get into any command-line detail in today’s discussion, though that would make for an interesting post down the road.

  • We’ll begin with between two and sixteen physical or virtual servers running Red Hat Enterprise Linux with the HA add-on license.  The servers must support power fencing, a mechanism that lets a surviving node stop a failed node from writing to shared storage by powering it off.  This is supported on physical servers from Cisco, Dell, HP, IBM and others, and is also supported on VMware.
  • We’ll need one or more shared block level storage instances accessible to all nodes, though one at a time.  In a traditional cluster, we’d make this available via an iSCSI or Fibre Channel SAN.
  • All nodes are on the same network segment in the same address space, though it’s wise to isolate cluster communication on a VLAN separate from published services.  The segments must support multicast, IGMP and gratuitous ARP.  There’s no traditional layer 3 routing separating one cluster node from another.
  • We’d install a web-based cluster management application called Luci on a non-cluster node.  We’re not concerned about fault-tolerance of this management tool, as a new one can be spun up at a moment’s notice and pointed at an existing cluster.
  • Then we’d install a corresponding agent called Ricci (or likely the more all-encompassing “High Availability” and “Resilient Storage” groups from the Yum repository) on each cluster node, assign passwords, and set them to start on boot.
  • At this point we’d likely log into the Luci web interface, create a cluster, add nodes, set up fencing, set up failover, create shared resources (like an IP address, a file system or an Apache web service) and add those resources to a service group.  If that sounds like a lot, you’re right.  We could spend hours or days on this one bullet the first time around.
  • Before we declare Mission Accomplished, we’ll want to restart each node in the cluster and test every failover scenario that we can think of.  We don’t want to assume that we’ve got a functional cluster without proving it.

What About Small Environments Without a SAN?
It’s conceivable that someone might want to cluster Red Hat servers in an environment without a SAN at all.  Or perhaps one has a SAN, but they’ve already provisioned the entire thing for use by VMware, and they’d rather not start carving out LUNs to present directly to every new clustered scenario that they deploy.  What then?  Well, there are certainly free and non-free virtual iSCSI SAN products, including FreeNAS, Openfiler and others.  Some are offered in several forms, including a VMware VMDK file or virtual appliance.  They can be installed and sharing iSCSI targets in minutes, where previously we had none.  Some virtual iSCSI solutions even offer replication from one instance to another, analogous to EMC’s MirrorView.  In addition to eliminating yet another single point of failure, SAN replication provides a bit of a segue into what we’re going to talk about next.

What About Geographic Fault Tolerance?
As mentioned early on, at my office we already have several layers of fault tolerance built into our computing environment at our primary data center.  When looking into Red Hat HA, our ideal scenario might involve clustering a service or application across two data centers, separated in our case by around 25 miles, 1 Gbit/s of network bandwidth and a 1 ms response time.  Can we do it, and what about the shared storage requirement?  Fortunately Red Hat supports certain scenarios of Multi-Site Disaster Recovery Clusters and Stretch Clusters.  Let’s take a look at a few of the things involved.  Be aware that there are other requirements.

  • A Stretch Cluster, for instance, requires the data volumes to be replicated via hardware or 3rd-party software so that each group has access to a replica.
  • Further, a Stretch Cluster must span no more than two sites, and must have the same number of nodes at each location.
  • Both sites must share the same logical network, and routing between the two physical sites is not supported.  The network must also offer LAN-like latency that is less than or equal to 2 ms.
  • In the event of a site failure, human intervention is required to continue cluster operation, since a link failure would prevent the remaining site from initiating fencing.
  • Finally, all Stretch Clusters are subject to a Red Hat Architecture Review before they’ll be supported.  In fact, an Architecture Review might be a good idea in any cluster deployment, stretch or not.

Conclusion
While many enterprise computing environments already contain a great deal of fault tolerance these days, the clustering in Red Hat’s High Availability Add-On is one more tool that Systems Administrators may take advantage of as the need dictates.  Though generally designed around increasing the availability of enterprise workloads within a single data center, it can be scaled down to use virtual iSCSI storage, or stretched under certain specific circumstances to provide geographic fault tolerance.  In today’s 24×7 world, it’s good to have options.


Sendmail For Outbound Alerts On Red Hat

Orange Mail Sign
 
Last week we discussed implementing Monit on a single instance of Red Hat Enterprise Linux to monitor local services, alert us to their failure via e-mail, and restart them as necessary.  We made the assumption that we’d be able to point to an existing SMTP e-mail server through which we’d mail out those alerts.  But what if we want to eliminate this dependency, and implement an SMTP service directly on the Red Hat server itself, through which Monit (or anything else running on this server) will be able to mail out alerts? Today we’ll walk through this fairly simple addition using Sendmail.

Before we get started, we should mention the following caveats:

  • The Red Hat Enterprise Linux server on which we’ll perform this installation must be registered with the Red Hat Network in order to download the necessary packages.
  • Your server will rely on a valid DNS server that can provide MX record lookups for any recipient.  This is a common feature of DNS generally, and is available in most corporate environments.
  • Assuming you want to send alerts to any recipient outside your company, your server will need to be able to talk out to the Internet via TCP port 25.  Many organizations restrict this via web filter technology.  If you’re setting up Red Hat servers in a corporate environment, then you know who to talk to about Internet filtering.  Finally, many home ISPs prohibit outbound connections on port 25 to try to limit spam from zombie computers.
  • For any command lines in the process shown below, triple-click the line to select it so that you’re sure you’ve got it all, and that it hasn’t wrapped off the right side of the screen.  You should then be able to copy and paste it into a Linux terminal or SSH session.

Let’s get started.  (A consolidated sketch of the commands follows the numbered steps below.)

  1. First let’s make sure that our prerequisites are installed.  On a clean install of RHEL 6.5, only one is missing, but let’s go ahead and check for all three in a single command…
  2. Install Sendmail…
  3. Build sendmail.cf using m4 macro…
  4. Add the domains from which your server will send mail.  You only need to mention your own domain here, not any domains that you may send mail to.  We’ll edit the configuration file with vim, though you could substitute gedit if you’ve installed the Desktop functionality.  *See footnote at end for suggestions on navigating in vim.
  5. Initially the local-host-names file will contain a single line commented out, as noted by the # sign at the front.  Add the domains from which you’ll be sending e-mail.  We added snnyc.com in our example below…
  6. Once completed, write your changes and quit vim or your editor of choice.  Then let’s (re)start sendmail and set it to launch each time the server boots.
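
In consolidated form, and reconstructed from the steps above using standard RHEL 6 package names and paths (treat the specifics as assumptions to verify; snnyc.com is the example domain from step 5):

    # Steps 1-2: check the prerequisites (sendmail, sendmail-cf and m4) and install what's missing
    yum install sendmail sendmail-cf m4
    # Step 3: rebuild sendmail.cf from the m4 macro configuration
    m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
    # Steps 4-5: add the domain(s) you'll send from, e.g. snnyc.com
    vim /etc/mail/local-host-names
    # Step 6: (re)start Sendmail and set it to launch at boot
    service sendmail restart
    chkconfig sendmail on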

That’s It
We’re done.  Any application or service running on this server can now simply be pointed to localhost for all outbound SMTP e-mail.  In initial testing, we’re able to send Monit alerts both to internal Exchange users and to external Google-hosted mailboxes.  Naturally there is more work to be done if you wish this server to relay outbound mail for other servers, or if you wish to also receive inbound mail.  Those topics are out of scope for today’s discussion, but could certainly make for an extended blog post down the road.  Finally, in giving credit where credit is due, Sachin Sharma provided this information and more in his post, from which I borrowed heavily.
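
As a quick smoke test, assuming the mailx package provides the mail command on your server, something like the following should now go out through the local Sendmail:

    # send a one-line test message via the local SMTP service
    echo "Sendmail relay test" | mail -s "Test from $(hostname)" someone@example.com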

* Having opened a file in the vim editor, hit the Insert key immediately.  Use the arrow keys to navigate to where you’d like to add new material, or delete old material.  Type as desired.  When finished, hit the Esc key.  Then type :wq to write your changes and quit.  To abandon vim without saving changes, use :q! instead.


Getting Started With Monit on Red Hat

Monit Console Via Browser
 
As is common in many professional environments today, the technology infrastructure that we support and maintain at my day job includes a wide variety of platforms. Narrowing our focus to operating systems, we run not only Microsoft products, but also several Linux distributions including Red Hat Enterprise Linux, Ubuntu and openSUSE. And while we monitor the applications that ride atop these operating systems using a variety of means, we encountered an interesting new requirement this week. How do we monitor, restart and alert administrators to particular failed services on an instance of Red Hat Enterprise Linux, running only a relatively lightweight process on the server itself?

Potential Solution
A few minutes with Google led us to Monit, where their web site suggests that it comes “With all features needed for system monitoring and error recovery.” Better yet, you can get “Up and running in 15 minutes!” It’s freely available for Linux, BSD and OS X. But would the claims prove true? Let’s put it to the test.

Getting Started
Those familiar with various Linux distros are well aware that package installation differs from one version to the next. The following exercise involves installing Monit on RHEL (Red Hat Enterprise Linux) 6.5. We’ll assume that we’re logged in as root for the duration of this process. (A consolidated sketch of the commands and configuration fragments appears after the numbered steps.)

  1. First we have to enable the EPEL (Extra Packages for Enterprise Linux) repository on our RHEL server using the following two commands. The first of these commands is longer than may be depicted in this blog post, but if you triple-click on the line of text to highlight it all, you should be able to copy the full command to your buffer and then paste it into a text editor or an SSH session.
  2. Then we install Monit and start it.
  3. To this point we’ve spent maybe 2 minutes, and technically we’ve already installed Monit. Not bad, but we can’t use it yet. Monit creates its own web page that runs on port 2812 by default. If you’re running a firewall, you may wish to open port 2812 using the commands below. Even so, Monit doesn’t allow access to its web page without further configuration beyond the firewall, which we’ll discuss later.
  4. All Monit configuration is done via the file /etc/monit.conf. We’ll use vim to edit it. If you’ve installed the Desktop portion of RHEL, you could use the graphic gedit in place of vim.
  5. You’ll see that virtually the entire monit.conf file is commented out with # signs by default. We’ll have to uncomment various sections and add our own parameters to make it work. Look for the portion that looks like the following and remove the # markers at the beginning of the lines. (Navigate within vim using the keyboard arrow keys, and hit insert before attempting to add any new content.) You might set the daemon value down to 60 if you wish to check at one-minute intervals.
  6. Scrolling further, uncomment the following line and replace mail.bar.baz with the name or TCP/IP address of an internal SMTP mail server through which you want to send your e-mail alerts. We ran our alerts through our local Microsoft Exchange server.
  7. Monit has a default e-mail format that’s usable, but I found myself wanting to customize it. In the example below, everything following the set mail-format, inside { }, represents the content of the alert messages. You can insert these lines following a section of monit.conf that displays the default format.
  8. Next we have to identify who receives e-mail alerts. You’ll see the example syntax: set alert manager@foo.bar. I prefer to only be alerted on things that I should be concerned about, so I might limit my alerts to only the following types. Again, triple-click to highlight and then copy-paste the following line, to make sure that you got it all.
  9. Now we come to the parameters that control the Monit web page on port 2812 as discussed earlier. Uncomment the following code. There are a few things to note here, which we’ve already incorporated into this example. We don’t need the use address parameter if we wish to listen on all interfaces. If we wish to be able to connect from client workstations in a particular IP range, you can do an allow statement with a network number and subnet mask, as seen here with 192.168.1.0. The allow admin:monit section will require that username and password when connecting. You might want to change it from the default.
  10. Finally, we have to decide what to monitor. This is where it gets a bit more complicated, and Google is your friend. The following example can be configured to monitor the Apache web server. Unfortunately, the path to httpd.pid within monit.conf isn’t accurate for Red Hat, but it is accurate in the example below. If you’re having trouble locating the pid file for a particular service, you can always try locate *.pid.
  11. Let’s also put in an entry to alert if our root file system gets more than 90% full.
  12. At this point we’re done editing monit.conf for purposes of this conversation. If editing with vim, we’ll hit Esc if we’re still in insert mode, and then :wq to write our changes and quit. Let’s restart Monit and give it a try. You’ll know if you got a syntax error pretty quickly, as Monit will alert you to the error and fail to restart if one exists. (Go back and modify /etc/monit.conf to fix any syntax errors.)
  13. Assuming Monit restarts successfully, you should now be able to connect via a web browser to your host’s name or IP address, followed by :2812.
  14. If you wish to test monit, you might force down apache to see if it restarts, and whether you receive an alert message. Keep in mind that Monit only checks every 120 seconds, or whatever interval we specified.
  15. Assuming your e-mail configuration is correct, and that your SMTP server is relaying e-mail for you, you should receive one message indicating that apache is not running, and a second indicating that it is running once again.
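
Since scattered fragments are hard to follow, here is a consolidated sketch of the steps above for RHEL 6.5. The epel-release URL, mail.bar.baz, 192.168.1.0 and admin:monit are placeholders or assumptions to adapt for your environment:

    # Steps 1-2: enable the EPEL repository, then install and start Monit
    rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    yum install monit
    service monit start

    # Step 3: open TCP port 2812 in the iptables firewall
    iptables -I INPUT -p tcp --dport 2812 -j ACCEPT
    service iptables save

    # Steps 5-11: representative fragments of /etc/monit.conf
    set daemon 120                           # polling interval in seconds
    set mailserver mail.bar.baz              # your internal SMTP server
    set alert manager@foo.bar only on { timeout, nonexist, resource }
    set httpd port 2812
        allow 192.168.1.0/255.255.255.0      # permitted client range
        allow admin:monit                    # change this default login
    check process httpd with pidfile /var/run/httpd/httpd.pid
        start program = "/etc/init.d/httpd start"
        stop program = "/etc/init.d/httpd stop"
    check filesystem rootfs with path /
        if space usage > 90% then alert

    # Steps 12-14: restart Monit, then test by stopping Apache and
    # watching Monit restart it within the polling interval
    service monit restart
    service httpd stop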

Extending Monit Further
While Monit is free, there’s an M/Monit license that provides consolidated monitoring of multiple Monit clients, as well as a mobile monitoring site. Pricing starts at 65 Euros for five hosts.

Conclusion
Monit meets our requirement of having lightweight application monitoring, restart and alerting running directly atop specific Red Hat Enterprise Linux servers. And while it takes longer than fifteen minutes to fully understand one’s configuration options, one can indeed be “Up and running in 15 minutes” with a little practice. All in all, Monit is a pretty impressive free product.


InteleViewer on Windows Server 2012 R2

InteleViewer on Windows Server 2012 R2
 
At the radiology practice in which I work, we recently received the following question from the office of a referring physician: Can Intelerad’s InteleViewer PACS imaging client be run on Windows Server 2012 R2?  A member of our IT team reached out to Intelerad Support.  They provided the following advice known to work with Windows Server 2008, and suggested that we try the same on Windows Server 2012 R2.  So we did.

Infrastructure
As is the case in many organizations these days, we would deploy this test installation as a virtual machine running on top of VMware.  Now that we’ve upgraded to VMware ESXi 5.5.0, we’re able to run Windows 2012 successfully, reliably, and best of all, virtually.  I allocated two virtual CPUs and 4 GB of RAM in a fresh virtual machine configuration.

Windows Installation
Because this is a test, and we don’t want to burn a legitimate Windows Server license, I reached for Windows Server 2012 R2 Datacenter Preview Build 9431, a trial installation that I’d downloaded some months ago.  As anyone who has installed Windows Server 2012 R2 can attest, it’s very quick to install it with the GUI console and otherwise default settings.  After that, I installed VMware Tools and all available Windows Updates.  Finally, I enabled Remote Desktop.

Installing InteleViewer
Installing InteleViewer is as simple as going to the InteleBrowser URL of your IntelePACS installation, logging in, scrolling down to Installers, and then choosing InteleViewer Tracks.  Those who support an Intelerad installation on a daily basis are very familiar with this process.  We chose an available ‘64-bit Windows Installer,’ and installed it using all the default settings.  Upon launching InteleViewer for the first time, we added the secure hyperlink to our IntelePACS installation.  Before the day was over, we’d briefly try versions 4-6-1-P122, 4-7-1-P129 and 4-8-1-P65.

Enabling ‘MultiUserMode’
The point of installing InteleViewer on Windows Server 2012 R2 is probably obvious.  An organization wants to allow multiple users to run simultaneous sessions of InteleViewer from a single server via Remote Desktop Services or Citrix.  InteleViewer requires two application-specific registry changes in order to make this work successfully.  The settings are documented below, and also in a PDF file in case the formatting isn’t intuitive here.  You should be able to paste the content into a text file, rename the extension to REG, and then run it once on your Windows Server.  If you wish to create these registry entries manually, note that they are of type ‘String Value.’

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Intelerad Medical Systems\InteleViewer]
"MultiUserMode"="true"

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Intelerad Medical Systems\InteleViewer]
"MultiUserMode"="true"

The Test
After installing InteleViewer and making our registry changes, we rebooted the Windows Server for good measure.  A colleague and I were then able to run simultaneous InteleViewer sessions without issue.  It should be noted that both of us are IT Professionals and not medical professionals.  And this limits our evaluation to a binary observation of whether or not InteleViewer works, rather than a substantive evaluation of how well it works.  As mentioned before, we ran versions 4-6-1-P122, 4-7-1-P129 and 4-8-1-P65 successfully.  So far, so good.


2014 New York International Auto Show

  • Audi A3 TDI sedan
  • Chevy Silverado 1500 High Country 4x4
  • All-new Chevy Colorado
  • Acura TLX
  • Ford Mustang 50th Anniversary Edition
  • Maserati GranTurismo
  • Lexus LS 460 F Sport
  • New Fiat-inspired Jeep Renegade
  • Porsche Macan
  • Jaguar C-X17
  • Cadillac ATS Coupe
  • Lexus RC 350
  • Bentley Motors display
  • Stars of Transformers 4: Age of Extinction
  • Jeep Cherokee descending an 18-ft, 35-degree decline

 
Life seems to be getting ever busier lately, to the point that Saturday and Sunday have largely become makeup days for whatever work didn’t happen during the week.  But every so often, it’s good to take a break from the routine and do something fun.  So this weekend I made my annual pilgrimage to the New York International Auto Show at the Jacob K. Javits Convention Center on Manhattan’s far west side.  This is my experience.

Getting There
My trip to the New York International Auto Show always begins in perhaps the most ironic way possible.  I leave my apartment in southern Connecticut on foot, walking to the train station.  Once on the train platform, I swipe my credit card at a kiosk in exchange for a $29.50 off-peak, round-trip ticket to Grand Central Terminal.  The train usually arrives almost precisely on schedule, not hampered by weather in April.  Once at Grand Central, I walk out the doors toward Vanderbilt Avenue and grab a cab over to the Javits Center, typically arriving just as the doors open to the public.

On Site
While you can purchase tickets via the web in advance, I’ve never seen a long line for purchasing them upon arrival.  There was literally no line at a credit card kiosk, where I purchased one adult ticket for $15.00.  From there, I walked through security, a fairly painless process, and presented my ticket for admittance.  Once inside, I tried to get a quick feel for the place, knowing that I’d be back around to take a second look at everything eventually.

Favorite Car(s)
So let’s cut to the chase.  Every year I pick one or more vehicles that I wouldn’t mind having; typically something new to the scene.  This year I’m picking two, based on environment or purpose of use.  I’m not suggesting that I could necessarily afford either of these vehicles, but they aren’t priced out in the stratosphere like some of the more exotic cars there.  So here goes.

Favorite Red State Vehicle
If I were currently living in what some would describe as the Land of the Free – interior US states with a lower population density, excluding Illinois – I’d love to drive the 2014 Chevy Silverado 1500 High Country 4×4, exhibited on the Javits floor in White Diamond Tricoat paint.  The perforated brown saddle leather interior of this crew cab is remarkably comfortable, and doesn’t try too hard like some pickups adorned in a Western motif.  Thankfully the Silverado High Country on display didn’t include the optional sunroof, which I took issue with on other vehicles sharing the same chassis.  (We’ll talk about that later.)  The only thing standing between this 2014 Chevy Silverado High Country and perfection is the fact that Chevy doesn’t offer it in the purer Summit White paint available on their other trucks.  That, and the window sticker of $53,860.00, which puts it in the same neighborhood as the aggressively capable Ford Raptor.

Favorite Blue State Vehicle
These days I live so close to the East Coast that the water next to my building goes in and out with the tide.  I park in one parking garage at home, and another at work.  And in this setting, I’ve become a fan of just-big-enough small sedans.  As long as there are seating positions where my 6’ 5” self and an occasional front-seat passenger are comfortable, I don’t otherwise care how small the overall car is or whether there’s enough room in back for anything larger than a laptop bag.  So, when Audi announced that an A3 sedan was coming to America, they got my attention.  And it turns out they brought the perfect one to the show.

The Audi A3 TDI sedan in Glacier White Metallic with Titanium Gray interior and 18” 10-spoke wheels really is an extraordinarily attractive small sedan, inside and out.  The car’s new enough that I’m not finding the turbodiesel version on audiusa.com just yet.  It’s safe to assume that the A3 TDI will use a variant of the same engine that powers diesel versions of the Volkswagen Golf, Jetta, and Passat.  After playing with the driver’s seat adjustment, I found a seating position in the A3 sedan where I could see myself remaining comfortable for extended periods of time.  The interior of Audi’s A3 sedan is tasteful and relatively clutter-free.

Audi’s A3 sedan seems well-positioned to compete with the similarly-sized Mercedes CLA250, Lexus IS 250 and Acura ILX, with gas mileage from the diesel variant likely to edge out all but the hybrid version of the Acura ILX, and then perhaps only in city driving.  There was no price tag on the A3 TDI sedan at the auto show, but of all the cars there, this is really the one that I’d most like to integrate with my current lifestyle down the road.

Other New Stuff
Of course the New York International Auto Show is an opportunity to see the first public examples of several new cars every year.  This year, for instance, Acura introduced their new TLX sedan, which replaces both their outgoing TSX and TL sedans, thereby filling the gap between Acura’s baby ILX and large RLX.  Combining the old TSX and TL into one car is a bit of a challenge, as the two covered a fair amount of ground in both sizing and pricing.  It’s my understanding that the new TLX maintains roughly the wheelbase and interior dimensions of the larger of the two cars it replaces, while giving up four inches of overhang to compromise on exterior dimensions.  Acura includes the engine options of both former cars: a 2.4 liter inline-four and a 3.5 liter V6, mated to new 8 and 9-speed transmissions respectively.  While show attendees couldn’t sit in the TLX this year, it seems well-positioned to serve as Acura’s mid-sized sedan for several years to come.

The Soliloquy slideshow at the top of this article also includes the Ford Mustang 50th Anniversary Edition, the new Porsche Macan and Jaguar C-X17 SUVs, the Lexus RC 350 and Cadillac ATS coupes and the Fiat-inspired Jeep Renegade.  Unfortunately, in the case of the Renegade, the only thing they appear to be rebelling against is Jeep’s legacy of off-road prowess.

GM’s Large SUVs
This year we get to see GM’s refreshed SUV lineup that shares the same platform as last year’s all-new Silverado and Sierra pickups.  Every sample of the Chevy Tahoe and Suburban, GMC Yukon and Yukon XL and Cadillac Escalade there was well equipped, and it showed.  A black Chevy Suburban LTZ on display stickered north of $70,000.  With a price tag like that, these vehicles may well be the exclusive domain of doctors, lawyers and well-paid executives, and not the Midwestern families that historically drove them.  I also have a major bone to pick with GM on the basic design.  Every large SUV there was equipped with a sunroof that opens below the roofline, subtracting space from the interior headroom of the vehicle.  In each case, I found myself having to make significant adjustments to the driver’s seat to avoid hitting my head on the artificially low roof.  One might feel like they should be able to hop into a Tahoe or Escalade without having to lower the seat significantly from the default position.  Needless to say, I’d advise full-grown men to steer clear of the $995 power sunroof option.  Given the significant expense of these vehicles, and the fact that a brand new Yukon burst into flames while on a test drive in Anaheim recently, it may be wise to avoid these vehicles altogether.  At least for this year.

Let’s Talk Tech
Not long ago, it felt like many vendors at the New York Auto Show were trying to use technology for technology’s sake in their displays.  In 2012, every car had to have an iPad-powered display sign in front in order to pass muster.  These days, the sales professionals still tote iPads, of course.  And some of the vendors still have electronic, interactive displays out in front of their cars.  But just as many are comfortable with traditional static signage that conveys the appropriate information.  And while social media was certainly still mentioned, it didn’t feel central to any major displays, such as Audi’s 2013 use of a giant Tagboard.  If only automakers understood this lesson inside the cars.

The Mercedes S-Class isn’t the only new vehicle with a mostly or fully-digital dash, containing digital representations of traditional analog gauges.  While digitally depicted gauges can sometimes look OK when driving, as soon as you shut off the car, they look like nothing at all.  Just as I’d rather have a mechanical automatic watch than some digital smartwatch with fake hands, I’d rather have a traditional speedometer and tachometer on my auto dashboard.  And I’m going to be particularly unhappy if digital gauges move down-market and get implemented with noticeably less attention to detail, à la the Cadillac ELR.  If this makes me an old fogey in my late 30s, I’m OK with it.

Leaving
While my day started out at a relatively crisp 39 degrees Fahrenheit, by mid-afternoon New York was at 66 degrees under a mostly sunny sky.  As I left the Javits Center, I again hailed a cab on 11th Avenue for the ride back to Grand Central.  My taxi was a yellow Toyota Camry with all the windows fully down, offering a steady dose of what passes for fresh air in Midtown.  Ironically, considering I’d spent the day at the auto show, my afternoon taxi driver reminded me why I’d never actually want to own a car in New York.  He used every scrap of pavement not currently occupied, disregarding lane markers and stop lights in order to get one or two cars ahead of where he might have otherwise been.  He was driving binary: full gas, followed by full brakes.  At one point, a nondescript black Ford Taurus tried to pull out into traffic.  My taxi driver denied it as he raced past, at which point the Taurus flashed red and blue lights.  But it came to nothing, and I arrived at Grand Central in literally the fastest time possible.  The subsequent train ride east had none of the same excitement.

Final Thoughts
The New York International Auto Show is always a fun way to spend a day.  One doesn’t have to love everything about the current state of the auto industry in order to find plenty to see and learn.  With so much there, everyone’s experience is sure to be different.  I was fortunate to take a break from the routine and make the visit.  This year’s show runs through Sunday, April 27th.  Tickets are $15 for adults, and $5 for children under 12.  For more information, visit autoshowny.com.


iOS 7.1 Mail App Encrypting Certain Replies Inappropriately

Encrypted Reply

‘Reply All’ Via iOS 7.1 Mail App


 
We talked recently about Apple mostly fixing one bug related to S/MIME encrypted email messages with their release of iOS 7.1.  Now it appears that they may have another.

Normally when using S/MIME email signing and encryption, the Mail app will show a blue lock icon next to any recipient from whom you’ve previously received a signed message and installed their public certificate.  Any recipients for whom you don’t have a certificate installed, and with whom you therefore can’t exchange encrypted mail, are shown in red with an unlock icon next to their name.  It’s perhaps the most visually intuitive S/MIME implementation out there.

If you’ve previously installed a certificate for every recipient on a given message, Mail will indicate that the message is Encrypted at the very top, again accompanied by a blue lock icon.  If there are any recipients on a message for whom you don’t have a certificate, the message will normally drop back to Not Encrypted at the very top and show the red unlock icon.  You would never want to send out a single encrypted message to a group of people such that only certain recipients had the means to decrypt and read it.

At the Connecticut-based healthcare practice where I work, we collaborate with outside technical people all the time.  In day-to-day e-mail exchanges, I’ll frequently receive e-mail messages from third parties where the sender chooses to include or carbon copy our CIO or one of my IT colleagues.  More to the point, someone who doesn’t use S/MIME signing and encryption will send me a message where they carbon copy someone who does use S/MIME.  Often the topic will merit a response, where I begin by hitting Reply All.

The iOS 7.1 Mail screen captured at the top of this article should not be possible.  You’re looking at a Reply All where I have no S/MIME certificate for the To party, but I do for the Cc’d party.  Yet the overall message status at the very top still indicates Encrypted.  If I began a new e-mail message to the very same recipients, the message status would be Not Encrypted.  This inadvertent Encrypted status on Reply Alls to a mixed group of recipients isn’t just a visual problem.  If sent, the reply message actually goes out encrypted as promised, such that only the name in blue will be able to read it.  Red recipients will get an smime.p7m attachment that they can’t do anything with.

Curiously, I’m only able to replicate this problem situation when doing a Reply All from my Microsoft Exchange e-mail account.  I’m unable to duplicate it when replying from Google-hosted IMAP accounts.  I should mention that while I’ve seen it happen from multiple devices running iOS 7.1, the device from which I recreated this scenario is an iPhone 5s that was completely wiped and reloaded after having been upgraded to iOS 7.1.

The work-around, for now, seems to involve avoiding the Reply All button.  If you instead limit yourself to choosing Reply, and then manually add the other recipients back, the message status seems to behave appropriately.  It’s an unfortunate extra bit of work, with the potential to stand in the way of wider S/MIME use in iOS-centric enterprises.

This information has been submitted to Apple on case number 593916475.


Register Your Site With The Web Filter Companies

Trend Micro Site Safety Center
 
Among the many simultaneous technical projects at the Connecticut-based healthcare company where I work, we’ve rolled out a fairly significant medical imaging solution providing mobile and web access for referring physicians and others.  For aesthetics and marketing purposes, we chose to launch this Internet-facing platform using a new dot-com domain name rather than use a subdomain of our existing web presence.  From a technical standpoint, all of this is very straightforward so far.

Recently we began hearing that our new domain name and web site were being blocked by the web filtering products used at two hospitals, one of which may be the most well-known health system in the state.  So I began talking with the technical folks at the first hospital system.  Initially I was told that we’d need to secure the signoff of one of their Department Heads or Vice Presidents in order to get an exception added to their web filter that would allow their users to access our site.  Of course I found it a bit curious that they would trust the algorithms and definition files of a faceless security vendor over the judgment of their rank-and-file staff.  At any rate, they eventually relented and granted the exception.

Meanwhile, it occurred to me that most hospital systems, corporations and schools trust software from companies like Websense, Barracuda and Sophos to properly scrutinize and categorize web content and either block or allow it.  An internal administrator using one of these products typically allows or blocks whole categories of content at a time rather than concern themselves with individual sites.  They might allow news or healthcare categories while blocking access to gambling, pornography or hate speech.  So I decided to go to the source(s), and try to get our new site properly classified.

The following is a list of the web security vendors that I contacted, hyperlinked to the relevant page as of the date that this article was posted.  Feel free to add additional web security vendors as comments.  Bottom line, after launching any new web site, it may be worth a few minutes to contact these services that act as gatekeepers within thousands, perhaps millions of organizations.  And if you hear that your site has been blocked, try to identify the product that is blocking it, and work directly with that security vendor for a resolution.  This effort will have a much wider impact than trying to work with the IT team at every individual institution that can’t access your content.


Encrypted E-mail Attachments Fixed in iOS 7.1?

Encrypted e-mail message with attachment on iOS 7.1

Signed and encrypted e-mail message.


 
S/MIME (Secure / Multipurpose Internet Mail Extensions) is one of two main methods of securing the content of e-mail messages between sender and receiver, regardless of the networks and servers that the message traverses along the way.  While S/MIME includes other functions, such as message integrity and non-repudiation, we’re going to focus on encryption today.

Where Can I Find It?
Though a small percentage of the general population are aware of how to implement and use S/MIME signing and encryption, the technology itself has been natively supported in most e-mail clients for some time.  Programs that support S/MIME include Microsoft Outlook, Mozilla Thunderbird, Novell Evolution, Apple’s Mac Mail, the iOS Mail App (in iOS 5 and later), and a small number of Android apps.  While perhaps used infrequently in the real world, S/MIME support is ubiquitous to the point that it would be hard to find a situation where it couldn’t be used if desired.

You Said iOS?
And then iOS 7 came along.  We talked here in late September about a problem that plagued Apple’s then-released iOS 7.0.  Incoming e-mail messages that had been created using S/MIME encryption, and which also carried file attachments, would often render those file attachments as pulsating and inaccessible when viewed on iOS 7 devices.  It wasn’t a universal failure, as encrypted messages created using Mac Mail on Mac OS seemed reliable.  Messages created using Microsoft Outlook and then read on iOS 7 – a scenario common in business – were the most prone to exhibiting the pulsating problem.  While the S/MIME attachment issue was brand new with the introduction of iOS 7, it persisted in nearly the same form for the next 5 months and 18 days, as we ran updates 7.0.2 through 7.0.6.

Who Cares?
To put the scope of the S/MIME attachment failure in perspective, my September blog post on the subject has pulled in 4,855 pageviews, representing 28% of the total traffic to this small blog since then.  Readers have come from 1,665 networks, including those of well-known companies, government agencies, universities and medical institutions.  They’ve been a geographically diverse bunch, spanning 88 countries, from Andorra to Yemen.

At The Office
Closer to home, Apple’s S/MIME attachment handling problem was one hurdle standing in the way of potential wider adoption at my employer in Connecticut.  Using S/MIME under normal circumstances, I might automatically encrypt every outbound message to recipients with whom I’ve previously exchanged a signed message.  Since iOS 7’s release, however, I began having to make assumptions about whether the recipient might need to view an included attachment from an iOS device, and then send the messages in the clear to accommodate easy reading.  Such a limitation has glaring security issues, of course, and also places too high a burden on non-technical end users.  The technology needs to just work.

Fixed
As of this past Monday, it almost just works.  Following Apple’s release of iOS 7.1 on March 10th, I quickly upgraded an iPhone 4, an iPhone 5s and an iPad 3.  I should probably mention that I updated them using the ‘sync with iTunes’ method, rather than over-the-air.  From these iOS 7.1 devices, I’m now able to read PDF, XLSX and DOCX attachments on S/MIME encrypted messages sent via Outlook 14 / Office 2010.  Almost always.

New Issue
In cursory testing following the iOS 7.1 upgrades, I quickly saw at least three occasions where an attachment on a new encrypted e-mail message appeared to bear the filename of a previously-received attachment.  It was as if the messages were being decrypted to a common cache that isn’t always cleared properly after use.  In these rare instances where the wrong filename was presented for the attachment, opening the attachment was hit-or-miss.  I’ve only seen it happen three times so far, but anything less than 100% reliability doesn’t denote a complete fix.

Possible Resolution
So I called Apple to establish Case ID: 588543752.  The total call lasted 43 minutes, and I was quickly escalated to a Senior Advisor.  He took down my information to pass on to engineering.  Though I thought it unlikely to help at the time, I promised to wipe an iPhone 5s clean, re-apply my configuration profile, and confirm that I could still re-create the problem afterward.  Since wiping the iPhone 5s and setting it up from scratch, I’ve been unable to reproduce any problems with S/MIME encrypted messages bearing attachments.  This may turn out to be one instance where wiping an iOS device following a major upgrade actually does some good.  Stay tuned for more.  It’s never boring.
