Tag Archives: HowTo

Server 2012 “server specified requires restart” loop when installing WID

I was trying to create an RDS deployment, something I’ve done before without issue, but this time the attempt to install the necessary roles and features put me into the reboot loop described in the title. Each time I tried the install, Server Manager complained: “The request to add or remove features on the specified server failed. The operation cannot be completed because the server that you specified requires a restart.”

I narrowed the problem down to the installation of the WID (Windows Internal Database) feature. From there I googled and found an MSDN forum thread written partially in another language, hence this English write-up of the fix.

http://social.msdn.microsoft.com/Forums/ro-RO/e7e9bc17-14d1-43c9-809c-464f69b366cd/server-2012-windows-internal-database-error-during-installation

The post useful to me was the one by kswail, about halfway down. Adjust your domain (or domain controller, if appropriate) security policy to allow “NT SERVICE\MSSQL$MICROSOFT##WID” to log on as a service, a GPO setting that can be found under Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment. Simply adding the security principal mentioned above to that policy solved a problem that haunted me for two days.
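
If you want to confirm the right actually took effect on the server (after a gpupdate), one rough way is to dump the local user-rights assignments with secedit and look for SeServiceLogonRight, the internal name for “Log on as a service.” This is only a sketch, run from an elevated PowerShell prompt; the export path is arbitrary, and the accounts are usually listed as SIDs rather than friendly names:

# Export the user-rights assignments and check whether the WID service account
# is present under SeServiceLogonRight. C:\Temp is an arbitrary path.
secedit /export /cfg C:\Temp\user-rights.inf /areas USER_RIGHTS
findstr /i SeServiceLogonRight C:\Temp\user-rights.inf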

Monitor Citrix License Usage with Cacti on Linux

There is this:

http://forums.cacti.net/about25193.html

It’s useless if your Cacti host runs on a non-Windows box. The suggestion to use wmic is alright, but for some reason the Linux wmic binary doesn’t handle the query properly:

bash-4.1$ /usr/bin/wmic --namespace='root\CitrixLicensing' --authentication-file='/etc/cacti/cactiwmi.pw' //hostname.domain.local 'SELECT Count FROM Citrix_GT_License_Pool'
CLASS: Citrix_GT_License_Pool
Count|DUP_GROUP|FLOAT_OK|HOST_BASED|HOSTID|PLATFORMS|PLD|Subscriptionate|USER_BASED|VendorString6|8|False|0|||MPS_ADV_CCU|20141216000000.000000+000|0|;LT=Retail;GP=720;CL=ADV,STD,AST;SA=1;ODP=0

I haven’t the foggiest idea why, nor did I care to dig into the source when I’m such a pro at spaghetti-stringing weird crap together to achieve a goal. The real answer would be the ability to run PowerShell scripts natively on Linux, but that hasn’t happened yet.

Without describing the thought process that got me there, I’ll just describe the final product. In Cacti you start with a “Data Input Method” that calls an external script; mine takes two input fields (the license server’s hostname and the ssh username to connect as) and returns two output fields, inuse and total. This is what it looks like:

The script is a simple bash script that looks like this:

#!/bin/sh
# $1 = hostname of the license server, $2 = ssh username.
# ssh in and run the PowerShell script that does the actual WMI query.
ssh $1 -l $2 "powershell.exe -inputformat none -noprofile -File \"C:\cygwin64\home\sshd\ctx_license_check.ps1\""

The reason I didn’t run ssh directly from Cacti is that passing the parameters isn’t really easy (or possible) after Cacti does all of its munging on the script. In case it isn’t obvious, the script ssh’s to the host specified by argument 1, using the username specified by argument 2, and then runs a PowerShell script. Since this script is run by the cacti user, I had to configure passwordless (shared key) logon for that user from my Linux host to my license server. In order for ANY of this to work I had to install Cygwin and sshd on my license server; the tutorial I followed is here:

http://www.howtogeek.com/howto/41560/how-to-get-ssh-command-line-access-to-windows-7-using-cygwin/

I created a local user on my Windows box, sshd. That user needed administrative privileges, which sucks, but since it’s a local account I don’t care too much; the password for the account can be shredded at this point. I also had to go into the WMI security properties for the namespace “root\CitrixLicensing” and grant Enable rights for that user there:
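
Before wiring any of this into Cacti, it’s worth confirming that the sshd account can actually read that namespace. A quick sanity check (sketch only; run it from a PowerShell prompt on the license server, and it will prompt for the sshd password):

# Run the same WMI query as the local sshd user to confirm the namespace
# rights are in place.
runas /user:sshd "powershell.exe -noprofile -command Get-WmiObject -Namespace root\CitrixLicensing -Class Citrix_GT_License_Pool"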

The last piece, to get the data out of the host, is the PowerShell script. I’m not a PowerShell expert, or even a rookie really. This is what I wrote to get the data:

# Query the licensing WMI namespace for in-use and total license counts.
$inuse = Get-WmiObject -namespace root\citrixlicensing -class Citrix_GT_License_Pool | Select-Object -ExpandProperty InUseCount
$total = Get-WmiObject -namespace root\citrixlicensing -class Citrix_GT_License_Pool | Select-Object -ExpandProperty Count
# Emit the name:value pairs Cacti expects on stdout.
Write-Host "inuse:$inuse total:$total"

Suggestions are welcome; it works.

Cacti’s expected return values are something like this:

value_name1:value value_name2:value

My actual output looked like this:

inuse:2 total:6

This isn’t going to be a how-to-build-graphs-in-Cacti tutorial, so I’ll just attach my graph template and the rest of the pieces. You can import it on your end to see what I did. This is what the final result looks like:

 

Edit:

After installing some more licenses the check broke. My PS1 started returning results like

inuse:0 4 total:5 6

which represents two separate license files. It was simple enough to fix, as the results from the WMI query come through reliably as arrays; no string manipulation necessary:

# Same queries as before, but each now returns an array (one element per license file).
$inuse = Get-WmiObject -namespace root\citrixlicensing -class Citrix_GT_License_Pool | Select-Object -ExpandProperty InUseCount
$total = Get-WmiObject -namespace root\citrixlicensing -class Citrix_GT_License_Pool | Select-Object -ExpandProperty Count
# Sum the per-pool values so Cacti still gets a single number for each field.
$suminuse = 0
$sumtotal = 0
foreach ($c in $inuse)
{$suminuse += $c}
foreach ($c in $total)
{$sumtotal += $c}
Write-Host "inuse:$suminuse total:$sumtotal"
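
As an aside, the same summing can be done with Measure-Object instead of the foreach loops; just a sketch, using the same class and property names as above:

# Alternative: let Measure-Object do the summing across license pools.
$pools    = Get-WmiObject -namespace root\citrixlicensing -class Citrix_GT_License_Pool
$suminuse = ($pools | Measure-Object -Property InUseCount -Sum).Sum
$sumtotal = ($pools | Measure-Object -Property Count -Sum).Sum
Write-Host "inuse:$suminuse total:$sumtotal"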

Back in business!

Accessing work files from a web browser

There are plenty of solutions out there to give you a “Web File Explorer” – good ones at that.

All of them were lacking the simple components I needed for my client base: the ability to authenticate against AD, and the ability to redirect/isolate users to specific folders based on who they are.

After trying a few different solutions I crafted one of my own in my head. I figured that if I could make the IIS FTP server provide the access to the files that I needed, I could either find or write a web front-end for that FTP server.

This post isn’t intended to outline the whole process, but rather just list the pitfalls that I encountered and how I got around them.

1) Nginx – All of my web traffic comes through a reverse proxy. Setting upload_max_filesize, max_execution_time, and the other limits in my web server’s php.ini is a no-brainer. The following directives were modified in php.ini:

upload_max_filesize
memory_limit
max_input_time
max_execution_time
post_max_size

What wasn’t a no-brainer was finding the bits of my nginx.conf that were causing me problems. Honestly, I still haven’t gotten things JUST RIGHT through nginx and may have to bypass it for this site. I’ve discovered that over SSL there is some sort of bug in nginx that won’t allow script execution to exceed 30 seconds. Ignoring that problem, I still had to modify the following line of my nginx.conf to suit my needs, even for non-SSL usage:

client_max_body_size 1G;

Obviously all of these directives are now set to questionable limits for a production web server; with such reckless limits on PHP and the web server, a lot of potential vulnerabilities are opened up.

2) AD authentication and folder redirection/isolation for FTP users is simple in IIS; it’s just not well documented and requires a very specific configuration. The process is as follows:

i – create your site, configure basic authentication, and permit the requisite users; this part is not complicated

ii – in your newly created site adjust FTP Directory Browsing as follows:

 

iii – Configure FTP User Isolation as follows (even if it seems counter-intuitive):

 

iv – Now for the parts that are documented REALLY poorly and are extremely important for avoiding the following error:

530 User cannot log in, home directory inaccessible.
Login failed.

When you log in using an AD account, the isolated home folder that the IIS FTP server looks for MUST be a virtual directory nested inside another virtual directory named after the short name of your AD domain. It should be noted that I found some documentation that appeared to be useful but led me to create a folder structure on the actual filesystem instead of using virtual directories in IIS Manager. That method DID NOT WORK. My experience says to create virtual directories for IIS to use, not real NTFS folders.
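
In other words, the layout IIS wants to see inside the FTP site looks roughly like this (DOMAIN and username being placeholders for your domain’s short name and an actual account name):

FTP site root
  \DOMAIN       <- virtual directory, alias = short name of the AD domain
    \username   <- virtual directory, alias = the user's account name;
                   its physical path is wherever that user should land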

So, in the root of your FTP site create a virtual directory and give it the short name of your domain as the alias. If you don’t have a domain and are just using local user accounts (possibly even combined), replace the name of the domain with “LocalUser.” In my case I am using domain accounts, so I configured my virtual directory like this:

 

The physical path here likely isn’t relevant, although there is no need to be reckless. I used the same physical path as my FTP site’s root.

v – Now it’s time to make each user’s individual FTP root. No different than the prior step, create a virtual directory, this time not in the root of the site but under the DOMAIN virtual directory. This time the alias MUST be the user’s username. The physical path should be the location where you want that user to land when they first log in. This doesn’t need to be their home directory or any such thing; it can be any place of your choosing. In my case, for the ease of the user, I made it their home directory.

 

vi – You can quit at this point; you have a working FTP site that AD users can log in to and get isolated into their own custom home directory. If you want to take it a step further (which I did), you can nest even more virtual directories under the user’s own virtual directory to give them access to files in various locations around the network. An example might look like this:

 

This would give user4 access to an engineering folder on a file share from within his FTP home. In a sort of mystical and magical way (which would only happen in a Microsoft world), the parent directory of engineering is still user4’s home while user4 is browsing over FTP.
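
For what it’s worth, the same three virtual directories could probably also be created from PowerShell via the WebAdministration provider rather than by clicking through IIS Manager. This is only a sketch (the site name, domain, username, and physical paths are all placeholders); the GUI steps above are what I actually used:

# Sketch only: create the DOMAIN, user, and nested virtual directories from
# PowerShell. Replace the site name, domain, username, and paths with your own.
Import-Module WebAdministration
New-Item 'IIS:\Sites\My FTP Site\DOMAIN' -ItemType VirtualDirectory -PhysicalPath 'C:\inetpub\ftproot'
New-Item 'IIS:\Sites\My FTP Site\DOMAIN\user4' -ItemType VirtualDirectory -PhysicalPath 'D:\Home\user4'
New-Item 'IIS:\Sites\My FTP Site\DOMAIN\user4\engineering' -ItemType VirtualDirectory -PhysicalPath '\\fileserver\engineering'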

The only part left for me was a web FTP interface. I am experimenting with one I found called Monsta FTP. For the time being it achieves the goal, more or less. I need to do some branding and also troubleshoot the drag & drop features it claims to have but that aren’t working for me. In some browsers I couldn’t get it to upload at all, but it does give me a starting platform.

That’s it.

GoPro Time Lapse to YouTube done right & free

If you’ve already got your pile of JPGs to turn into a video, you can skip ahead. If not, a few things to mention.

  1. You may want to take a VERY LONG time lapse, as in many hours. If that’s the case you’ll almost assuredly need external power. I achieved this by modifying the case that came with my battery backpack. I’m happy I got the battery backpack; even though it didn’t provide the life I needed, I felt better about drilling and sawing on the back that came with it than I would have about destroying the original GoPro case. I used a drill to make the majority of the hole, then spruced it up with my pocket knife.
  2. If you plan on overlaying a timestamp on your video, make sure that the clock is right on your GoPro.
  3. Each JPG on my GoPro used a little over 1MB on average. I’d suggest using 1.5MB per shot to figure out how much memory you’ll need (a rough way to run the numbers is sketched just after this list). I shot every 60 seconds for about 3.5 days and it would have overfilled an 8GB card; thankfully I had 32GB, so I was good.
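
If you want to run the numbers for your own shoot, the arithmetic is trivial. A quick sketch using the figures above; interval, duration, and per-shot size are the knobs to change:

# Back-of-the-envelope storage estimate for a time lapse.
$intervalSeconds = 60         # one shot per minute
$durationHours   = 3.5 * 24   # about 3.5 days
$mbPerShot       = 1.5        # generous average JPG size
$shots = $durationHours * 3600 / $intervalSeconds
$gb    = $shots * $mbPerShot / 1024
"{0} shots, roughly {1:N1} GB" -f $shots, $gb    # about 5040 shots, ~7.4 GB here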

On to the post-processing portion…

Create yourself a working directory to copy your photos into from the GoPro. If you have more than 1,000 stills, the GoPro will sort them into more than one directory, so be sure to grab all of them from the various locations they may be in.

If, when copying them, you’re presented with an error saying that there are already files in the destination with the same name, choose to copy but keep both files (hopefully you’re using an OS that gives you that option).

If you need to do anything funky such as rotating your video 90 degrees, this is where you’ll want to do it: in batch processing on your JPGs. I use Irfanview for all of my image batch processing. One thing you almost certainly will want to do to save time is resize your JPGs to match the size of video you want to get out of the project; resizing the JPGs will result in higher quality output than resizing the video would. I wanted 720p video, so I resized all photos to be 720 pixels tall and told Irfanview to preserve the aspect ratio. The Irfanview workflow, roughly:

  1. Hit the “B” key on your keyboard to open batch processing and add all of your photos to the batch.
  2. Choose the necessary options (resize, etc.) using the advanced button.
  3. Now is also the time to rename all of the files in order. You can do that however you wish; in Irfanview the way to do it is to rename the results of your batch process and use the “sort files” dialogue, choosing “by date ascending.”
  4. I also added my timestamp to the photos during batch processing, which can be done with “add overlay text” in the advanced options for batch conversion. Note that Irfanview needs its plugins installed to accomplish this successfully.

Once you’re happy with post-processing you can move forward with encoding the movie. This is the part where FREE is important. I spent some time searching for a tool that was point-and-click, and they all appeared to be pay-for. Enter trusty old mencoder. I used to be something of an expert at using mencoder to transcode DVD rips into high-quality, cross-platform, easily stored MP4s. I got out of that business a long time ago, so I just did some copy/paste from some man pages to achieve what I wanted for this project. Here is the exact command, run from within the folder full of post-processed JPGs, that created my file:

mencoder mf://*.jpg -mf w=960:h=720:fps=10:type=jpg -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:mv0:trell:v4mv:cbp:last_pred=3:predia=2:dia=2:vmax_b_frames=2:vb_strategy=1:precmp=2:cmp=2:subcmp=2:preme=2:qns=2 -oac copy -o ~/ak2mn.avi

When done, you’ll end up with the file you specified at the very end of the command, in my case ak2mn.avi in my home directory. In case it’s not obvious, I moved my pictures from a Windows box running Irfanview to a Linux box for the encoding. Windows will work fine too; you’ll just have to modify the command a bit to match the Windows way of storing files. You’ll also want to modify the width and height to match the size of your processed input files. The fps value controls how many stills make up each second of output video; at fps=10, roughly 5,000 stills become a bit over eight minutes of footage.

In my case the end result was uploaded to YouTube and can be seen at http://youtu.be/qclzMWxgxeA.