Posts tagged geeky

Adding Exchange 2010 mailboxes from text file with PowerShell

I wrote before about adding Exchange 2010 mailboxes with PowerShell and AWK. I was having some trouble with the syntax of importing from a .csv or tab-delimited file, so I punted and used awk on my workstation and got the work done.

That workflow is not ideal. I’d rather do it all in PowerShell. I got some great help from the fine folks over at /r/powershell and Don Jones’s PowerShell books and videos.

Here is a better way:

  • Use the Import-Csv cmdlet to import the data as an array of objects with a text property for each column.
  • Add and adjust the properties we need and their values.
  • Pass the whole array to New-Mailbox, which will do the right thing, as long as all the parameter names match the object properties.

If I exported the data as .csv, with properly named column headers, this would get even easier, but I will give PowerShell the same data I gave awk for the sake of parity. So let’s say I have no control over the format the data arrives in and it comes space-delimited like this:

Alice Adams aadams aadams@corp.domain.com Password1
Bob Baker bbaker bbaker@corp.domain.com Password2
Charlie Carter ccarter ccarter@corp.domain.com Password3
Dan Davis ddavis ddavis@corp.domain.com Password4
Ed Evans eevans eevans@corp.domain.com Password5
Frank Foster ffoster ffoster@corp.domain.com Password6

Here is how to add these users with PowerShell, using the data from this file.

To use a space for the field delimiter, we’ll use -Delimiter ' '. This file does not have a header row. Import-Csv imports each row as key-value pairs, so each column needs a name. By default it takes the names from the top row, but that would be the wrong thing to do here, since the top row is data. So we can either put a header row on the file, or define column names with a -Header argument. Here is the command to import my users.txt file as an array of objects, $users:

PS> $users = Import-Csv -Delimiter ' ' -path .\users.txt -Header FirstName, LastName, SamAccountName, UserPrincipalName, plaintextpass

This loads the data from the file into an array of objects $users.  Each element of $users has properties as defined in the header with values from the corresponding row.  Here’s the first element in $users:

PS> $users[0]

FirstName         : Alice
LastName          : Adams
SamAccountName    : aadams
UserPrincipalName : aadams@corp.domain.com
plaintextpass     : Password1
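
Incidentally, if the file already carried a header row with these same names, like this (only the first data row shown):

FirstName LastName SamAccountName UserPrincipalName plaintextpass
Alice Adams aadams aadams@corp.domain.com Password1

then the -Header argument could be dropped and the same import works with just:

PS> $users = Import-Csv -Delimiter ' ' -Path .\users.txt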

Next, we’ll add the “Name” property, which is a string in the form “FirstName LastName”:

PS> $users = $users | Select-Object -Property *, @{name='Name';expression={$_.FirstName + ' ' + $_.LastName}}

The property is appended to the end of the list, but that’s fine, since New-Mailbox accepts these parameters in any order. Here’s how the first object looks now.

PS> $users[0]

FirstName         : Alice
LastName          : Adams
SamAccountName    : aadams
UserPrincipalName : aadams@corp.domain.com
plaintextpass     : Password1
Name              : Alice Adams

New-Mailbox wants the password as a System.Security.SecureString, and won’t accept a plain string at all. Objects of type System.SecureString are stored encrypted in memory. We’re defeating the security benefit of that behavior by handling the passwords as plaintext elsewhere in the script and in the data file. For exactly that reason, ConvertTo-SecureString will complain if we ask it to accept plain text with -AsPlainText, but it will do it anyway if we add -Force. The whole thing goes like this:

PS> $users = $users | Select-Object -Property *, @{name='Password';expression={(ConvertTo-SecureString -AsPlainText -Force -String $_.plaintextpass)}}

Now we have the password stored as a SecureString.  Trying to print the password only prints “System.Security.SecureString” and not the actual contents, but it is in there.

PS> $users[0]

FirstName         : Alice
LastName          : Adams
SamAccountName    : aadams
UserPrincipalName : aadams@corp.domain.com
plaintextpass     : Password1
Name              : Alice Adams
Password          : System.Security.SecureString
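
The value really is in there. If you want to prove it to yourself, one quick way (for testing only, since it prints the password back as plaintext) is to wrap the SecureString in a PSCredential and read it back; the user name 'check' below is just a throwaway:

PS> $check = New-Object System.Management.Automation.PSCredential ('check', $users[0].Password)
PS> $check.GetNetworkCredential().Password
Password1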

Now let’s get rid of that plaintext password. Strictly, this step is not necessary. Since “plaintextpass” does not match any of the parameters that New-Mailbox accepts, it will simply be ignored. But since we already pass the password as a SecureString, why pass it as plaintext as well? So we strip the property out like this:

PS> $users = $users | Select-Object -Property * -ExcludeProperty plaintextpass

And finally, our objects look like this:

PS> $users[0]

FirstName         : Alice
LastName          : Adams
SamAccountName    : aadams
UserPrincipalName : aadams@corp.domain.com
Name              : Alice Adams
Password          : System.Security.SecureString

It is not an accident that these are exactly the parameters that New-Mailbox wants. This is the fun part.

PS> $users | New-Mailbox

That’s it. The contents of the properties of each object in $users are bound to the corresponding parameters of New-Mailbox, which takes them and creates six new mailboxes.
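
If that pipeline binding feels a little too magical, here is the longhand version of the same call, spelled out as a loop over the same properties we just built (a sketch of what the pipeline is doing for us, not a different method):

PS> foreach ($u in $users) { New-Mailbox -Name $u.Name -FirstName $u.FirstName -LastName $u.LastName -SamAccountName $u.SamAccountName -UserPrincipalName $u.UserPrincipalName -Password $u.Password }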

And of course, since this is PowerShell, all of this can be done in one big pipeline if readability is not your thing. That would look like this:

PS> Import-Csv -Delimiter ' ' -path .\users.txt -Header FirstName, LastName, SamAccountName, UserPrincipalName, plaintextpass | Select-Object -Property *, @{name='Name';expression={$_.FirstName + ' ' + $_.LastName}}, @{name='Password';expression={(ConvertTo-SecureString -AsPlainText -Force -String $_.plaintextpass)}} | Select-Object -Property * -ExcludeProperty plaintextpass | New-Mailbox

This Old Code

Although revisiting and updating existing code isn’t necessarily fun or an obviously lucrative way to spend your limited time, it can certainly pay dividends. I know my personal knowledge, skill, and experience have changed over the years, and code which seemed perfectly good six years ago can be painful to read now. Perhaps you’ve gained a new appreciation for readable code in general, or for limiting how deeply you nest your conditional blocks, or for avoiding incomprehensible loops six pages long. Regardless, code which is easy to read and understand is easier to maintain, and has fewer bugs.
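
To make the nesting point concrete, here is a contrived PowerShell fragment (the $user object and the Do-Something call are made up purely for illustration). The deeply nested form:

function Process-User ($user) {
    if ($user) {
        if ($user.Enabled) {
            if ($user.HasMailbox) {
                Do-Something $user
            }
        }
    }
}

reads much more easily flattened into guard clauses, even though the logic is identical:

function Process-User ($user) {
    if (-not $user) { return }
    if (-not $user.Enabled) { return }
    if (-not $user.HasMailbox) { return }
    Do-Something $user
}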

Sometimes it’s not your skill that was at fault, but your environment. Perhaps the code has simply outgrown the original project scope and become littered with references and obscure exceptions that were bolted on later. Reconsidering and refactoring the code is a necessary step to regaining control of the chaos. Or the original project simply didn’t afford enough time for development, and you had to leverage existing code which didn’t quite fit. Our own ipMom account interface started life as PostfixAdmin, which was quick and easy to put into production, though you wouldn’t be able to tell anymore.

Finally, programming languages themselves change. New libraries and new functions are added which can make your code leaner and cleaner overall. With relatively new programming languages, it’s easy to have code which predates any developed or widely-used best practices for the language. Bringing the code up to spec now will make it easier for you, and easier for others, to maintain it in the future.

Getting Moar Dropbox

Dropbox is a service which I think most of us at ipHouse use, and we don’t mind plugging it now and then. They’re testing a new camera upload feature and users who are willing to test it out for them can get some additional disk space for their trouble. So this…

http://forums.dropbox.com/topic.php?id=57860

plus a couple of screenshots of the camera upload in action results in some extra Dropbox space (the original screenshots are not reproduced here).

Speeding up JS, Playing Well with Others

Another common trick for speeding up JavaScript on a web page is offloading some of it to other sites. Google Analytics and Google Ads are probably the most well-known and ubiquitous. Google also provides the Libraries API, a content distribution network (CDN) for jQuery, Prototype, and other common frameworks, and hosts small projects such as ie7-js. Facebook and Twitter offer their own frameworks for integrating with social media. Then there’s Typekit, which lets you load web fonts from their site, etc.

Since most web browsers limit the number of concurrent requests to any particular host, loading scripts from other sites can let the rest of your website load from your server faster. For some sites, it’s also the best way for them to offer an integration API with minimal concern for breaking old code (since they control it all).

However, there can be a downside. Loading lots of scripts from other servers makes your website more dependent on those servers, which are outside your control. If any of them are down or inaccessible, it affects the performance and usability of your website as well. I’ve encountered this more often as content offloading has become more common, and it’s pretty annoying when a page stalls completely for want of some trivial bit of fluff.

What do you think? Does offloading lead to a faster and more usefully integrated web? Or to a house of cards, ready to topple at the first server outage anywhere?

Running suEXEC + FCGID

A long, long time ago (in Internet years), our webmaster (and Mike) changed the ipHouse web cluster to run PHP via FastCGI. They did this with the thought that FCGID would offer greater performance and stability while providing the same security as running PHP via the CGI interface.

Around the same time, I also tried implementing FCGID on Ubuntu on one of my virtual servers. It worked well, but I thought it was a bit verbose. Recently, I took on the project to set up FCGID for a managed customer, so I asked our webmaster how he implemented FCGI via the FCGIWrapper directive while still getting suEXEC to work.

