Thursday, March 26, 2015

Active Directory and OS X client issues and workarounds

So I'm perusing my email today, when I get this tidbit:

And I think to myself, yeah, that's a pretty good title.

We utilize Casper at my place of work. We also have Active Directory. They both provide different things to Mac clients, but they are separate silos of service. So here are a couple of quick things about Active Directory with OS X clients that I've experienced over the last eight years in an enterprise environment.

Recently we've been experiencing an issue with our Mac clients failing to renew their client keys with Active Directory. Basically, we bind a client, and then at some point, when it requests its client key update, AD fails to recognize the key presented by the Mac; within 24 hours the Mac will no longer receive any data response from Active Directory. It still *looks* bound with the basic tests, but it isn't actually reading any data from AD. How can you tell quickly and easily? ID a user in Active Directory via Terminal. A local user account will return something like this:

macosclient:~ admin$ id admin
uid=499(admin) gid=20(staff) groups=20(staff),12(everyone),61(local accounts),80(admin),33(_appstore),90(_lpadmin),100(_lpoperator),204(_developer),398(com.apple.access_screensharing),399(com.apple.access_ssh)

An Active Directory domain account will have a very different response:

macosclient:~ admin$ id itsupport
uid=191571970(itsupport) gid=951704675(MYDOMAIN\Domain Users) groups=951704675(MYDOMAIN\Domain Users),191535401(MYDOMAIN\Everyone - GAL access),1700667532(MYDOMAIN\Exchange Admin),12(everyone),62(netaccounts),1163998572(MYDOMAIN\AD Delegation),461668394(MYDOMAIN\Citrix-OfficeDemo),675762013(MYDOMAIN\Techserv),1066250157(MYDOMAIN\DesktopAdmins)

So if you query a known Active Directory account and get a local account response (or no response), you know the computer isn't actually getting a data feed from Active Directory, even though it appears to be a bound AD client. You will need to unbind and rebind the Mac to resolve this issue, ideally removing the Active Directory computer object manually (because a broken bind prevents the AD plugin from cleaning up on its own). In addition, after some troubleshooting (that wasn't as helpful as I hoped), we ended up changing the client password/key interval to never request a refresh:

sudo dsconfigad -passinterval 0

Although this opens a bit of a security hole (because our Mac clients will never request a key refresh from AD), it has resolved the more business-impacting issue of AD users being unable to log into Macs with current credentials. Which leads me to the second piece of this AD discussion.
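The `id` check above is easy to script for a fleet. Here's a minimal sketch; the account name `itsupport` and NetBIOS domain `MYDOMAIN` are placeholders from my examples, so substitute your own known AD account and domain:

```shell
#!/bin/sh
# check_ad_bind -- report whether a known AD account actually resolves
# through Directory Services. A healthy bind lists DOMAIN\ groups for
# the account; a broken bind returns nothing, or a same-named local
# account with no domain groups.
check_ad_bind() {
    account="$1"
    domain="$2"
    # Look for at least one group prefixed with the NetBIOS domain.
    if id "$account" 2>/dev/null | grep -q "${domain}\\\\"; then
        echo "OK: $account resolves with $domain groups"
    else
        echo "BROKEN: $account has no $domain groups; unbind and rebind this Mac"
    fi
}

# Placeholder account/domain -- use a real service account at your site.
check_ad_bind itsupport MYDOMAIN
```

Dropped into a Casper policy or extension attribute, a check like this surfaces silently broken binds before users start calling.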

When naming an AD client computer, think old-school DOS naming rules, and stick to them. Active Directory won't prevent you from leaving a Mac named "John Smith's iMac" when binding to AD, but you will end up with an actual device name like "john-smiths-imac$", which isn't the same client name. Much like any system on a traditional network, you should change the name to something friendly to DNS: 13 characters or fewer, and no funny business. We've found that this seems to improve the AD communication stability of Mac clients bound to our domain.
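A pre-bind sanity check can catch bad names before they ever hit AD. This is a quick sketch of the conservative rules above (letters, digits, and hyphens only, no leading or trailing hyphen, 13 characters or fewer); the 13-character cap is my own recommendation, not an AD-enforced limit:

```shell
#!/bin/sh
# valid_ad_name -- check a proposed computer name against conservative
# DOS/NetBIOS-style rules before binding: alphanumerics and hyphens
# only, must start and end with an alphanumeric, max 13 characters.
valid_ad_name() {
    printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]{0,11}[A-Za-z0-9])?$'
}

# "macosclient01" is a placeholder example name.
valid_ad_name "macosclient01" && echo "good name" || echo "rename this Mac first"
```

Run against the name you plan to set with `scutil --set ComputerName` (and HostName/LocalHostName) before you bind.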

In addition, recent versions of the OS X Active Directory plugin do a great job of communicating and registering Dynamic DNS client IDs. Unfortunately, if you are using any iteration of a Mac Pro, you may be using two separate network ports for two separate connections, and end up with two duplicate DDNS IDs. This can cause issues when using StorNext or Xsan with a separate metadata subnet on the secondary Ethernet port. This one is also easily resolved with the AD plugin:

sudo dsconfigad -restrictDDNS "en1"

This command restricts dynamic DNS updates to the second Ethernet port only. There are some details I've left out (like determining which port to restrict to), but I'll leave that as an exercise for the reader.
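That exercise is scriptable, for what it's worth. A sketch that maps a hardware port name to its BSD device by parsing `networksetup -listallhardwareports` output; the port names ("Ethernet 1", "Ethernet 2") and MAC addresses in the sample are illustrative, and port naming varies by Mac model:

```shell
#!/bin/sh
# port_device -- read `networksetup -listallhardwareports` style text on
# stdin and print the BSD device (e.g. en1) for a given hardware port.
port_device() {
    awk -v port="$1" '
        $0 == "Hardware Port: " port { found = 1; next }
        found && /^Device: /         { print $2; exit }
    '
}

# Sample text in the documented output format (placeholder addresses):
networksetup_sample='Hardware Port: Ethernet 1
Device: en0
Ethernet Address: 00:00:00:00:00:01

Hardware Port: Ethernet 2
Device: en1
Ethernet Address: 00:00:00:00:00:02'

printf '%s\n' "$networksetup_sample" | port_device "Ethernet 2"
```

On a live Mac you would pipe the real command instead, e.g. `sudo dsconfigad -restrictDDNS "$(networksetup -listallhardwareports | port_device 'Ethernet 2')"`, after confirming which port carries your metadata subnet.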

Those are some small tidbits of managing Active Directory features in OS X. There is a whole slew of other things you can do with the AD plugin, which you can learn about in depth via the dsconfigad man page.

Next time, I'll talk about how to integrate Active Directory binding into Casper Imaging sequences.

Monday, November 24, 2014

Statistics and Fear Mongering don't tell you where the threat might lie.

I subscribe to a well-known security email newsletter. This morning one of their top articles caught my eye: "Most of the top 100 paid Android and iOS apps have been hacked."

STOP. Do not google this (yet). Let me finish, and then you can give the authors the page views they so desperately want.

A company that sells services to securely encrypt code (at least, that appears to be their business model) has released a report about how many of the top 100 applications (paid and free) for both Android and iOS have been "hacked." So after reading the article (press release, actually), I dug a bit deeper, and then watched a very informative video about the issue, provided (conveniently) by the authors. This is where I want you to think twice about giving them their ad revenue and page views. After all, the devil is in the details.

So a hacker downloads an application from the App Store. They then back it up to a computer, or jailbreak their iOS device and copy the app package to a computer. Then, using well-known tools, they are able to crack the encryption on the application (as long as it is resident in system memory) and trace the code as the application runs.

This does present one important concern for the vast majority of iOS app users: if there is an exploit available via a man-in-the-middle attack on communications between the app and a server, the hacker will be able to use that gateway to acquire data passed between the server and the iOS app (even the official app from the App Store). This, of course, is going to be app-specific, so an app that doesn't transmit data to a private server isn't really a vector, but (for example) a banking app that communicates using SSLv3 would be. Of course, there are layers here. Assuming the data is encrypted (as it should be) and the public/private keys are still secure, then all the hacker has is a bundle of encrypted data that can't be decrypted until the keys are broken or stolen. And I'm paranoid, so let's assume that no small percentage of these apps still use broken protocols or don't encrypt data at all. Your data that is transmitted from that application to the server is at risk.

But here's the kicker: there's no discussion of this portion of the threat at all in the article or at the authors' website. They are selling tools to encrypt your code and app, not data transmission between the app and a server.

So it's not as if the application is hacked, code is injected to steal additional information from your phone, and then somehow you get that hacked version of the app on your phone. Unless, that is, you jailbreak your device and then download duplicate copies of popular applications from an outside app store (such as Cydia). That vector is legitimate, assuming the hackers can inject code into the bundle, re-package it, and then distribute it via a jailbreak app store as a free alternative to the actual software from the actual vendor.

In conclusion, the security article is fear mongering at its best, and uninformative at its worst. If a user does not jailbreak their device, and does not download applications from jailbreak stores, then the threat comes down to the overarching security of the app vendor and what data they have access to on your phone. So the majority of the top 100 apps for iOS and Android have been hacked, and if those applications transmit secure data insecurely between the app and a server, you could be at risk. That's the real takeaway from the article, which you shouldn't go read unless you like sensational statistics.

Here it is.

Monday, June 11, 2012

WWDC 2012: What should have been...

What should have been...

On Monday, June 11, Apple, Inc. announced the next generation of its flagship Mac Pro desktop computer. Left untouched for nearly two years, the Mac Pro today received the update many professional users have been looking for.

First up was an all new machined aluminum enclosure. Designed to bridge the gap between a desktop computer and a server, the new Mac Pro can be installed in a standard 19" rack with the optional rails kit. Although Apple has repeatedly expressed their desire to "get out of the datacenter," they appreciated the need of some professional video and audio users to utilize existing rack space in professional editing suites.

As expected, the new Mac Pro utilizes the Intel® Xeon® E3-1275 processor. Apple has partnered with both nVidia® and AMD® to offer professional Mac users three unique graphics options: the ATI Radeon HD 7850, as well as the nVidia GeForce 670 and Quadro 5000 cards. Utilizing up to 32GB of ECC 1600MHz DDR3 RAM, the Mac Pro includes three Thunderbolt ports (two rear, one front), five USB 3/2 ports (three rear, two front), and four Gen3 PCIe slots, including one double-width 16x slot and three 4x slots. For storage, the machine includes four independent 6Gb/s SATA ports and a slot-loading SuperDrive. The default configuration includes a 512GB SSD drive.

The removal of the tray-loading optical drive and the second optical drive bay has some professionals questioning whether Apple will ever natively support Blu-ray burning in Mac OS X. Surprisingly, Apple maintained one FireWire 800 port and two optical digital audio (TOSLINK) input and output ports on the new Mac Pro. Dual Gigabit Ethernet ports, as well as 802.11a/b/g and Bluetooth 4.0, round out the connectivity options on the new Mac Pro.

Apple also announced that all Mac Pros would now include Mac OS X Lion Server, "the simplest server OS in the world."

Monday, March 26, 2012

On Disk Arrays and Warranties: G-Speed eS power failure.

(note: I will be uploading pictures during the week of March 26-30)

A number of years ago, G-Technology was an independent drive manufacturer with an aesthetic that appealed to Mac users (brushed metal housings). They made great devices, and during my tenure as a Mac Genius at Apple, I recommended their products without hesitation time and again. I used them myself, with 2 G-Drives, 2 G-Drive Minis, and (originally) the G-RAID. My support of G-Tech was cemented when the G-RAID had a failure of its controller chip. G-Tech couldn't get replacement boards or chips (the manufacturer had stopped making the components), and they were very gracious with my options for replacing the device. I was a proponent of the smaller business treating the customer right, and their customer service reflected the attitude my Genius team and I practiced at Apple. In the end, they replaced my G-RAID with a 3TB G-Speed eS as soon as that product was released in mid-2008. This was their top-of-the-line eSATA storage; the next step up would have been a Fibre Channel array (and I couldn't justify buying a $1500 Fibre Channel card, even though they offered me that array as an option during the 6-month wait for the release of the eS).

For the next four years (almost to the day) the G-Speed eS served me well as a high-speed RAID 5 array. I keep raw footage on it for my late-night visual effects hobby, as well as classes and footage from completed fxPhD projects. After all, with a RAID 5 array I'd have at least some warning when a drive fails, with time to generate a backup of anything live and urgent, right? As I learned last month, I was wrong.

I woke up one morning to a strangely silent workspace. The ever-present hum of the G-Speed fan was gone. I checked the power, the cables, and my Mac; all seemed in order, but the drive would not spin up. I did some basic troubleshooting (I was a Mac Genius once, after all; I have a bit of troubleshooting skill) and determined I had a power problem. The device wouldn't start up, but the power light would flash on (then turn off). If I removed all the drives, it would start up, the fans would spin, and there were no error lights. The web console showed only the error of four missing drives and a missing array when I checked the chassis. I searched the web for similar issues, and found this post at Creative Cow. So I contacted G-Technology support via their web-based form.

48 hours later, I hadn't heard back, so I gave them a call. I provided my serial number, informed the technician that I knew I was out of warranty, and described my problem. His response?
"Sorry, you are out of warranty. I can't help you."
I was trying to be calm. The data on the drive was (mostly) backed up elsewhere, so I wasn't worried about data loss. I asked if they could sell me a power supply, since that was the most likely failed component.

"We don't sell parts. You could buy a new drive."

At this point, I got a bit upset. I asked if I could buy a chassis (no drives) since I had 4 perfectly good drives.

"We don't sell the chassis without drives. You could buy the cheapest version and pull out the drives if you want."

Well, I was done. G-Technology's once amazing customer service had degraded to the point of "don't help, sell." Frustrated and finished, I expressed my disappointment that a once-great company was now a shell of its former self, and hung up. I went to the internet and began searching for information on G-Speed eS failures. I found nothing (hence this blog post). My next step was to examine the power supply and see if I could replace or repair it, so I cracked open the rest of the case.

[insert disassembly pictures here]

After removing the six screws at the bottom that held the brushed metal housing to the internal frame, I was relieved to find a third-party manufacturer's label visible on the power supply. The G-Speed eS used a 250W Enhance ENP-7025B power supply. A quick search led me to a number of suppliers, and (surprisingly enough) they all listed the G-Speed eS as a compatible device. So I ordered one for about $50 USD from Amazon.com.

A few days later, it arrived, and I unboxed it only to discover the replacement power supply had a standard ATX power connector and a number of drive connectors, but not the same cabling as the G-Speed eS. I contacted the vendor (a third party through Amazon) and they explained that I needed an ATX-to-AT converter to get the spade pin-outs required for the G-Speed eS. They were also gracious enough to send one to me free of charge. A few more days passed, and the AT converter cable arrived. I did a dry-run connection, and sure enough, the power supply worked, the chassis powered up, and the drives spun up. The only problem was the sheer mass of cabling from the ATX connector, and then the almost 24" of AT cabling. So I pulled out the soldering gun, the heat gun, and some heat-shrink cable shielding (rather than just electrical tape). I tagged the two (yes, only two) cables I needed from the AT cable housing, spliced them with the original spade heads from the original power supply, and trimmed the rest of the ATX cable assembly down to stubs about 3" long. I then capped all those cables with non-conductive caps and installed my replacement power supply.

[insert picture of cable ends here]

Reassembling the case, plugging the drives back in, and powering up the array (and then my Mac), I was thrilled to find my volume intact, my data intact, and my G-Speed eS usable again.

In the end, I was able to resolve this myself, with a bit of ingenuity, experience, and faith in my own skills. The real takeaway from the experience, though, is that G-Technology is no longer a company I would recommend. Their products may still be good, and they have a decent warranty, but their customer service is almost as bad as AT&T's. It's silly, really, that one simple thing would have kept me as a purchasing customer and a promoter for G-Technology. If the technician on the phone had directed me to Amazon (or some other online reseller) with the information on what I needed for a power supply, or even provided some suggestions beyond "Sorry, spend at least $1500 on a new device," I would still recommend their products. But I can't.

Thursday, December 15, 2011

The amazing lack of understanding and SOPA

Today is December 15th, 2011, and here in the US the House Judiciary Committee is holding its markup of H.R. 3261, the Stop Online Piracy Act (#SOPA).

As brave and forward-looking as that name sounds, as I watch the proceedings I am dismayed and shocked at our representatives' lack of understanding of the issues being brought up. I will simply say this: these people are not educated enough to make a decision, one way or another, on this bill, and therefore they should not be moving it through committee.

The sheer lack of understanding of DNS (the Domain Name System), dynamic IP allocation, and DNS-to-IP variability, and the potentially disastrous effects of this bill that seem to be completely overlooked, should have been the first sign that these folks aren't educated enough to vote on this issue.

So the question comes (tin-foil hat on): what or who is behind the impetus to get SOPA out of committee today? Who stands to benefit from SOPA?

Wednesday, October 26, 2011

On "Personal Technology" and the "Next Big Thing"

I have the fortune (or misfortune) of getting Information Week delivered to my door. This year has seen a lot of articles about iOS, Android, personal technology, and custom application development. Of course, the executive team at work gets Information Week as well, from the CIO down to the department managers. We have been fairly forward-looking for a while: when Apple released the iPhone 3GS, we were able to support deploying those on a corporate level, and allowed users to bring them into the environment (on a personal level) if they were configured with a profile as detailed in the Apple iOS enterprise support documentation.

Sadly, we were unable to support droids for a while, since they didn't even conform to ActiveSync security policies (this has changed, fortunately). This had been going on for a couple of years, and for a business that is very concerned with privacy (we are a hospital, after all) it was very forward-looking. The hospital did not choose to go the route that many others are on (hiring a team to develop custom applications for iOS or Android connectivity to our various data silos), but that wasn't a problem.

This year, however, has been the year of "personal technology" in the trade rags. And as the hospital desperately wants to be at the forefront of technology, they are pushing for support of personal technology at every level.

I am a fan of supporting personal technology. It helps your employees feel attached to your business. It makes them believe that you care about them and their wishes. But sometimes you have to balance that with security, business needs, and legal concerns. A strong leader in IS doesn't just give in to the employees, no matter how important they are, if what they want to do compromises the security of your business, and even more so if it violates state or federal law. I thought we had a strong leadership team in our IS department. It turns out I couldn't have been further from the truth.

Leadership knows one word when it comes to "I want" from the VIP/high-profile users: "Sure!"

Can I bring my personal device to the hospital, put explicitly restricted patient information on it, without any oversight or data security managed by the IS security group?

"Sure!"

Can I buy hardware that is blatantly incompatible with our environment, and connect to the internal network with it, so that I look important at conferences and meetings?

"Sure!"

But when we (the IS team on the ground with the technology) attempt to raise red flags or warn the leadership team that we need some sort of structure around these things, and that they have to work (function) within the legal and moral scope of our business, we are ignored or (worse) chastised for even considering such things.

Personal Technology is the "Next Big Thing" in IS - it makes sense because it can save money, it invests your employees in the business on a personal level, and it can improve the effectiveness of your employees - when your environment is able to support it. Before you take the time to let every smartphone, tablet, and personal device into your environment, you need to evaluate the devices, develop a plan, and implement it in a measured way with continued review to determine if what you are doing works for your employees and your business.

Tuesday, October 25, 2011

SMB issues with Lion Mac OS X 10.7

This is more of a rant than anything. So be prepared.

As you may (or may not) know, my day job is in IS at a fairly large regional hospital group. I was brought on a couple years ago because of my experience with Mac OS X, and their desire to evaluate and deploy Mac systems in a limited fashion. I put in the effort and legwork to get Macs integrated as much as I could, with what support I had from the leadership. For around 2 years it worked OK. Then came the Lion.

Mac OS X 10.7 has issues here. I'm not sure if it has issues everywhere; I guess it depends on how your environment is built, and whether your users are actually using the environment and not just the computer. The first (and biggest) problem is that when Mac OS X 10.7.0, 10.7.1, or 10.7.2 connects to an SMB share, it doesn't actually mount the share you requested. What Lion does is parse the share path and mount the directory containing the share you want. That seems to be confusing (at least to AppleCare), so allow me to explain.

If you want to mount the share myhomedirectory from your SMB server, you enter the path:
smb://mycompanyserver/homedirectories/myhomedirectory
In Mac OS X 10.4, 10.5, and 10.6, the following things happened when you authenticated to that share:
1. SMB mounted the device /Volumes/myhomedirectory
2. The sidebar showed mycompanyserver under shared devices
3. Finder opened a folder of myhomedirectory
4. Mac OS X displayed the share myhomedirectory on the desktop (if this item is visible).

In Mac OS X 10.7, the following things happen:
1. SMB mounts the device /Volumes/homedirectories
2. The sidebar shows mycompanyserver under shared devices
3. Finder opens a folder of myhomedirectory
4. Mac OS X displays the share homedirectories on the desktop (if this item is visible).

OK, now read through those again. When I was on the phone with Apple, they explained that they had changed the behavior of SMB to mount the root share. Except that's not what they are doing. They are mounting the directory one level above the share point you attempt to access.
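The difference is easy to demonstrate with plain path parsing. Using the example URL from above (the server and share names are the same placeholders), the volume Lion actually mounts is the parent directory of the share you asked for:

```shell
#!/bin/sh
# The share the user asked for, from the example above:
share_url="smb://mycompanyserver/homedirectories/myhomedirectory"
share_path="${share_url#smb://mycompanyserver}"

# What 10.4 through 10.6 mounted: the share itself.
echo "expected mount: /Volumes/$(basename "$share_path")"

# What 10.7.0 through 10.7.2 mount: the directory one level above it.
echo "actual mount:   /Volumes/$(basename "$(dirname "$share_path")")"
```

The first line echoes /Volumes/myhomedirectory, the second /Volumes/homedirectories, which is exactly the mismatch users see in /Volumes and on the desktop.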

This introduces all sorts of problems, which may be security issues or just user-friendliness issues, depending on your environment. In my environment, no one has any access to homedirectories, so the links on the desktop, in /Volumes, and in the sidebar are all useless. The only thing useful is the Finder window that opened, but only if you leave it in List, Cover Flow, or Icon view, because if you change to Columns it breaks and you can't see your directory contents anymore. If for any reason you close that Finder window, you have to use Go -> Go to Folder... to reopen it.

Of course I opened a ticket with Apple, and got it escalated, but I'm not holding my breath. In the meantime, we get the following issues in our environment:
  • When a user logs into a Lion system, they are presented a dialog listing all the share points in the directory where their network home folder resides. They have to navigate to the network home folder, select it and then continue before the system will log in.
  • If a user connects to a network share, they have to be very careful about the application they attempt to use, because most applications won't be able to parse the path if the entire directory tree isn't at least readable by all.
So Lion is effectively broken in my environment. Which, by the by, has about 15,000 PC systems and about 50 Macs. Which leads to tomorrow's post: On "Personal Technology" and the "Next Big Thing".