Blogtober!

So you may laugh, but I am writing a blog about blogging. Why, you ask? Because I really need to get over the fear of blogging. I, for one, am not a huge fan of writing, or of proper English. I have always just done my own dialect, and at times it's hard to get the office to interpret my version of bad English, or really bad English.

So why did someone who hates writing agree to do Blogtober? Well, it was mostly to back myself into a corner. Left to my own devices, I would make up excuses about why I can't do it, when really it was just me not wanting to write. That is the reason I created a blog in the first place: to force myself to look at it and say, you need to get back at this. For me to get anywhere, I need to get uncomfortable and just let it go.

One of the hard things for me is coming up with topics that I feel are interesting. I come across things I am doing at the office and can't really blog about some of them, and the rest feel like things everyone is already doing, so they don't seem that interesting. But the truth is you are doing things just a bit differently, maybe enough to help just one person, and to me that makes it worth it. So, advice for the people out there reading this: get out there, get uncomfortable, and write down what you are doing. For me this started off with the idea of dumping my notebooks, but then it kind of changed. I still need to dump my notebooks (note to self), but I started taking on the reviewer and opinion angle. Not sure what it's going to evolve into, but honestly, I like what it's become: all over the place, with no real niche.

I know this is a short one, but it's just a start. 🙂 Time to start the real fun. Next blog after the Atlanta VMUG UserCon. I'm setting a deadline for myself there.


VMworld Announcement of AppDefense

I was completely absorbed when they announced AppDefense, and I had to dig in and find out more. This is an overview of what I have found so far.

AppDefense is designed to protect your virtual and cloud-based applications. Traditional security is network-centric: in a VMware infrastructure, you had to run NSX to do micro-segmentation and then use its Guest Introspection driver with third-party security tools for anti-malware, intrusion prevention, web reputation, log inspection, and integrity monitoring. Don't get me wrong, this works well, but it only works after the fact, and it's not a holistic approach; there are gaps. Where NSX gave you micro-segmentation, a least-privilege network, AppDefense gives you the ability to apply that same least-privilege model to the application itself. This is where AppDefense fits in. AppDefense is a software-as-a-service (SaaS) offering from VMware, hosted on AWS. It can be deployed either in conjunction with NSX or on its own. I would highly suggest you run them side by side, since you cannot stop end users from clicking on malicious emails.

AppDefense takes a three-phase approach: Capture, Detect, and Respond.

The Capture phase in AppDefense is driven by an engine that looks directly into vCenter to discover the inventory. It also ties into your provisioning systems, and it learns from the ESXi hosts what each VM is doing. It starts learning the applications and what they do. At this point you have a general understanding of what the application is, with some good data points, but that's not all. Here is the great part: once you set an application owner, the security team can ask that owner specifically what an application is doing and why, based on the metrics being observed. This decreases time to market for applications and gives a true sense of what the application is doing.

The Detect phase in AppDefense uses a protected layer on the hosts as its monitoring point. It compares what the applications are made of, what they do, and what they need to talk to against known manifests. It then watches for changes: in the operating system, in processes, in how processes talk to each other, and in how one VM talks to another VM.
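
To make the manifest idea concrete, here is a toy sketch of that comparison. The data format is purely my own invention for illustration; VMware has not published what the real manifests look like:

```python
# Illustrative only: a toy version of the "compare observed behavior to a
# known-good manifest" idea. AppDefense's real manifest format is not
# public, so the structure below is my own invention.

# The learned manifest: which processes a VM is allowed to run, and which
# ports/peers each process is allowed to use.
manifest = {
    "web-vm-01": {
        "nginx": {"listens": {443}, "talks_to": {("app-vm-01", 8443)}},
    }
}

def detect_violations(vm, observed):
    """Return every observed behavior that is not in the VM's manifest."""
    allowed = manifest.get(vm, {})
    violations = []
    for proc, behavior in observed.items():
        if proc not in allowed:
            violations.append(f"unexpected process: {proc}")
            continue
        for port in behavior["listens"] - allowed[proc]["listens"]:
            violations.append(f"{proc} listening on unexpected port {port}")
        for peer in behavior["talks_to"] - allowed[proc]["talks_to"]:
            violations.append(f"{proc} talking to unexpected peer {peer}")
    return violations

# A remote shell riding on the web server shows up immediately, even
# though it is using the "normal" port 443.
observed = {
    "nginx": {"listens": {443},
              "talks_to": {("app-vm-01", 8443), ("203.0.113.9", 443)}},
    "cmd.exe": {"listens": set(), "talks_to": {("203.0.113.9", 443)}},
}
print(detect_violations("web-vm-01", observed))
```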

The Respond phase in AppDefense is the true power of the system. This is where you automate the response to a detection. You can use ESXi to suspend a VM or revert it to a snapshot. With NSX you can do the fun stuff: block network traffic, run a packet capture, or use Guest Introspection to leverage third-party tools like Trend Micro's Deep Security Manager to kick off scans, log analysis, or quarantine, and once the machine comes back clean, throw it back into production.
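
AppDefense drives this through its own policy engine, but just to illustrate the kind of response it automates, here is a minimal sketch of the "suspend a suspicious VM" action scripted directly against vCenter with pyVmomi. The hostname, credentials, and VM name are placeholders, and this is my own illustration, not AppDefense's actual mechanism:

```python
# Minimal sketch: suspend a suspicious VM via vCenter using pyVmomi.
# Host, credentials, and VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in prod
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

def find_vm(name):
    """Walk the inventory and return the first VM with a matching name."""
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in view.view if vm.name == name), None)
    finally:
        view.Destroy()

vm = find_vm("web-vm-01")
if vm and vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    vm.SuspendVM_Task()  # freeze the VM so it can be examined for forensics

Disconnect(si)
```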

During a session, I got to see this work in a somewhat "live" demo, and that sold me on the product. They stopped an application exploit that rode on the application's standard operations. For instance, an exploit against a web server on port 443 that opens a remote shell back out over port 443. Since it looks like normal port 443 traffic, an ordinary firewall or IDS would not catch this behavior unless it had analytics on what the application was actually doing, and most systems today do not. I may be a bit naive about some of the other products out there, but for VMware to step into this market and bring the security approach down to the application layer is a great move.

It might be a bit of a hurdle to get the security teams of the world to use it, but once they see the value it offers, and the time savings when putting an application to market, I think all will be happy.

I am not sure what this product costs yet, but I would be interested to find out. I have heard rumors that it may be priced per CPU; if that's the case, it can get expensive quickly. So be on the lookout for more coming down the pipe on this as more documentation is released.


For more info, please check out the following:

https://blog.cloud.vmware.com/s/content/a1y6A000000e6lUQAQ/article-vmware-launches-appdefense

https://www.vmware.com/products/appdefense.html

The Dirty Taboo Topic of IT Debt! The Micro Series… Maybe if I Don't See a Squirrel!

Why is it no one wants to talk about IT debt? Are they scared of saying, hey, I need help? Do they not have a clue that they are in debt? Maybe they feel like it's job security? Maybe they are the ones putting you into debt?

Over the years I have managed to put myself in IT debt, and I have tried to start digging myself out. But what is IT debt, really? The best explanation I could find was from Gartner:

IT debt is described as "applying a quantifiable measurement to the backlog of incomplete or yet-to-be-started IT change projects."

IT debt is nothing more than the projects that were cut short because someone said, it works, hurry up, move on to the next thing; we will go back and fix that later. The little things that were misconfigured for testing or troubleshooting also take their toll. Over the years we have all cut a corner knowing we would go back and fix it later, and here it is two years down the road and we still have not done so. I'm not too proud to admit I have done the same thing; we all have, if you dig deep. The question is: did you learn from it? I know I did. My shortcut turned out to be a landmine waiting for me to step on it during an upgrade, and that one-hour shortcut cost twenty-plus hours of work to clean up and fix. So why talk about this now? Because to me, it seems like this is what is holding the world back.

So, what are the causes of IT debt? If you ask people, it's budget cuts. Gartner likes to blame it on IT budgets, but I am not a huge fan of that statement. Is it budget cuts? Or is it pure laziness? Or being rushed? Or ignorance? Or not taking ownership of your service? I believe it's a mixture of all of them, with budget cuts as the start of the chain. I see it working like so:

A project is put together with a projected cost. It is submitted to a group of people who have no clue about IT, and they say, do this for 20% less money and we will sign off. We say okay, we will make it happen. Pause here and think about this!

We take this plan back to the team and say, here is what we need to get this done and the timeline for when it needs to happen. The team sets out to acquire what is needed and starts setting things up. As the timeline and budget grow toward exhaustion, we start cutting corners, doing just the bare minimum to make it work, and we may skip a page in the install and best-practices guides, thinking we know better. Pause here and think about this!

Now we are in the last days of the project, and our manager is hounding us to hurry up and finish; there are other things that need to get done. Under pressure, and reluctantly, we say, well, it's ready for production, when really we know it's not and there is another week's worth of work, tweaking, and testing to be done. Pause here and think about this!

So, what happened in this situation?

First breakout: the start of the chain was the board approving the project while saying, we approve only 80% of your cost. Well, where do those savings come from? They come from the labor to do the project, not so much the cost of the assets, since you can only get so much of a discount. This project was doomed from the beginning. Then, as good IT people, we said yes, we will make this happen, and that is another flaw in the chain. Sometimes you need to stand up and say, no, we can't do it for this kind of cash. If you are presenting this to the board for approval, it's your job to keep IT operations afloat, and running this model will only cause them to sink.

Second breakout: cutting corners to just make it work. This ends up killing everyone in the long run, not to mention the ignorance or laziness of not properly doing the task at hand. This in turn creates a huge number of what I like to call landmines: a little missed change or forgotten setting that comes back and blows up in your face at the most inopportune time.

Third breakout: this is where it all comes together. The pressure from above comes down hard to make sure the project is on budget and on the original timeline. Fearing there will be repercussions for missing the deadline or going over budget, we say, yes, we are on time and on budget. But would it really be our fault? We gave them the original numbers to make this happen; the board cut them, and the manager agreed to it. So to me, it's the manager's fault. Why are we scared? Well, it's the old philosophy that "shit rolls downhill" and the stupid blame game. Now it is 100% our fault for saying this project is production-ready, when in fact we know it is not. We have knowingly cut corners, not optimized, not properly tested, and so on. These are all signs that the team doing the install wants to take no ownership of it.

After this project, where does this put us? It gives us a new thing that works, but is so full of flaws that it is prone to break down, like a bucket full of holes. So not only did your team's shortcuts create X amount of backlog, but now you must fight fires on top of that while also keeping the lights on for the rest of the environment, in turn digging the IT debt hole deeper. Just to throw a number on it: that 20% cost savings the board thought was a great idea has really cost them five times that. The difference is that they don't see that cost, because no one brings those numbers to them. We never correlate for them that the 20% they tried to save cost us X dollars this month.
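
Just to put rough numbers on that claim (these figures are purely illustrative, not real project data):

```python
# Illustrative numbers only: what a 20% "savings" can actually cost.
project_budget = 100_000          # what IT originally quoted
board_cut = 0.20                  # board approves 20% less
savings = project_budget * board_cut            # $20,000 "saved"

# The corners cut to hit that number come back as rework and firefighting.
landmine_cleanup_hours = 500      # spread over the following year
blended_hourly_rate = 150
firefighting_cost = landmine_cleanup_hours * blended_hourly_rate  # $75,000
outage_cost = 25_000              # one bad upgrade weekend

true_cost = firefighting_cost + outage_cost     # $100,000
print(f"Saved {savings:,.0f}, later spent {true_cost:,.0f} "
      f"({true_cost / savings:.0f}x the savings)")  # 5x
```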

Good news: we all start out like that leaky bucket.

So how do we solve all this brokenness? I vote we nuke it and start over! To be continued in the next post.

How does all this IT infrastructure stuff actually work?

Looking at it from a high level, the day-to-day job of an admin is basically to keep the lights on and keep things moving forward. We put a huge amount of time and effort into making sure what we do is done to the level we expect. But for some reason projects always seem to get blurred or rushed along. This leads to sub-par work, a few checkboxes left un-ticked, and so on. Eventually we catch these, or go back and fix them later. Otherwise there is a landmine waiting to catch someone, because we as admins got distracted. We have all run across IT landmines, some of our own making and some our predecessors'. It's inevitable that you'll run across them.

Over the last month or so I have had some time to think and reflect on what is going on. As I sat and looked at what a good portion of my workday is wrapped around, I realized it's amazing that anything in IT actually works. Think about it, and hear me out on this.

When building a Rube Goldberg machine, you pick what you want to accomplish, what your theme will be, and what materials you will use. That is the key starting place. Then you lay out the design roughly, and research and figure out the physics of said design. You spend countless hours researching and documenting the plan. Then it becomes time to build. This is your time to shine; with all the research you've done, you think you can take on the world. You start laying out the pieces and putting together the individual parts: one team works on one section, you work on another, a third team works on yet another, and so on. The teams start testing individual parts of the contraption, finding flaws, and adapting as necessary. This is where your design and your actual end product begin to differ. Things keep changing; tests keep failing and succeeding. Finally you have fully tested individual parts and segments, and it's ready for the big moment. Let's run this thing. This is what all the time and effort has led up to. You start the first action in a seemingly never-ending chain of events. Things go smoothly until that one part fails. You assess the situation, reset, make some modifications, and go again. This repeats until you finally get it to succeed all the way through.

Now compare this story to building a new data center. You build your business requirements, your design, and your hardware choices, and begin formulating the complete build design. You research every best-practices document, check every driver version to make sure it's supported, and on and on. You order your hardware and wait for it to show up. When it does, the real work begins. You start building out your new data center: one team does storage, one team does the network, one team does compute, and so on. You start testing, find some issues, make some changes to correct the errors, and continue until each team is satisfied with their part. Then you start to test the whole system. Things fail, you make more changes, and you repeat this process a few times.

The two things are vastly different in what they do, but extremely similar in the build-and-design process.

So, take that Rube Goldberg machine you built and think of running it billions of times a day on repeat. A single slight gust of wind or a shake in the floor could throw the contraption off and it would fail. In IT you build redundant systems, but that's really like building two of these machines and trying to keep them running all day, every day, 365 days a year. All it takes is one thing out of place, from a malformed packet to a wrong SQL query, and the whole contraption could go up in flames.

Also, let's not forget about all the changes you made from the original design. Now sit back and think: did you document every change? Did you test every setting? Did you… and so on. I am sure there are a few changes in there that were not documented. These are what I like to call landmines, the little settings you forgot you changed for one reason or another, maybe out of frustration because some process was taking too long, like disabling user acknowledgements or disabling spanning tree. Right now they might not make a difference, but in the future they will make someone's day a nightmare.

Think about it a year down the road: you go to plug a device into a switch for a redundant link, and all of a sudden you create a switching loop. Or you go to do a firmware update, upload it, and all of a sudden all your servers reboot at once. All because you changed one setting a year ago. These are landmines, lying in wait for someone to step on them.

So lately I have come to the conclusion that IT infrastructure is a Rube Goldberg machine built on a field of landmines. We are all just one change away from something going wrong, hoping we did not alter the path of something and step on a landmine in the process.

So what is the moral of this story? I really don't know; I just thought I should write about it. Maybe: document your changes, no matter how small, because they could come back and bite you or someone else down the road. And make sure you understand the full ramifications of your changes before you make them, as they may alter the path of something a few steps down the process chain, in turn blowing everything up.


Quick AppVolumes 2.11 Security Hardening!

Companies today are growing more and more security-aware and paying closer attention to the fine details, so there is a growing demand for hardening applications. On that journey I noticed that the AppVolumes web console accepts both HTTP and HTTPS for authentication, so if you choose, you can log into the AppVolumes Manager over plain HTTP. There is no little checkbox or setting to remove this option, but I did find a great KB article for it:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2095972

This article explains how to turn off port 80 on the AppVolumes Manager.
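
The KB has the actual steps. As a quick sanity check after making the change, you can verify the Manager no longer answers on port 80 with a few lines of Python (the hostname below is a placeholder for your own Manager):

```python
# Quick check that the AppVolumes Manager no longer answers on port 80.
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "appvolumes.example.com"  # placeholder for your Manager's FQDN
print("HTTP  (80): ", "still open!" if port_open(host, 80) else "closed, good")
print("HTTPS (443):", "open, good" if port_open(host, 443) else "closed?!")
```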

The article below is linked inside that KB and explains how to move your clients from port 80 to 443.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2091589

And the last one, also referenced, covers how to replace the default SSL certificate and generate a CSR.

https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2095969

This was more or less just a quick blog post to let people know there is a solution out there for this.


My thoughts on Surface Pro 4

After being given some time with the Surface, I have gotten a chance to know it a bit. First off, I would like to say thank you to those of you who convinced me to migrate to the Surface world. This thing is great. It's the perfect format for what most people need. Do you need a big 15-inch laptop screen? No, not really! One thing I do miss from a 15-inch laptop is the 10-key pad, but I can manage, and I am sure I will learn to deal without it just fine.

I went out and got myself an i7 Surface with 16GB of RAM and a 256GB SSD, along with a Type Cover and a docking station. The reason I got the extra RAM was to run VMs on it; I figured it would be a missed opportunity if I couldn't just spin up a test VM in Workstation. For that, I bought a WD external 1TB hard drive to store the VMs on. There was no reason to spend the astronomical amount they want for the 1TB internal SSD, and the external spinning disk works great for this, giving me the flexibility to do what I want with it. The goal of buying a Surface was twofold: the big one was to replace a 15-inch laptop and an iPad Air, and the other was to act as a desktop for my house. I wanted to replace an older desktop I had and gain some mobility. I already have a 1TB OneDrive account (not so unlimited anymore) that I can use to store some docs, and I also have a Synology NAS at the house.

Personal dislikes:

I guess the first and biggest dislike I have is that the Pen needs a locator beacon in it, kind of like what Tile did with their product. I seem to leave mine everywhere, not because I mean to, but because it falls off. The magnet concept is a good idea, but did anyone actually test it? Mine seems to fall off in transit, in my backpack, when I bump the Surface; even if you just stare at it, pretending you can use the Force, the Pen falls off.

The second biggest is the fan and heat. I went with the i7 version, and I have to say this thing is anything but quiet. It seems like the fan is running any time the machine is on, and there is noticeable warmth on the back of the Surface. I wouldn't mind the fan coming on from time to time, but it seems to run non-stop. I have not put much research into whether there is a bug behind it, but it's really annoying, to say the least.

The third one is the docking station "thing." It's a nice option, and I was glad to get one. I was looking forward to taking it home, hooking up my dual 29-inch monitors and my Surface, and being all happy. Well, that did not work out so well. I hooked up the dock and nothing happened; then, after a pause, all three screens went black. I am thinking, "Well, I broke it," but after a few minutes the Surface display came back on and I could set up the other two monitors and arrange them. I was happy for the time being: it was kind of nice, but clunky to get working just right. Then I unhook the Surface, go to work the next day, come back home, hook it up again, and: black screens. You don't even see Windows Hello working to sign you in; you just hear the sound. After about two to three minutes, all three screens come to life, and it is like that every time you plug into the dock. I have not tried just one external screen yet, but I plan to now that I've thought of it. I want to replace my dual screens with one ultra-wide anyway, so if one screen works well, I am good with that.

Benefits:

Flexibility and mobility are the two biggest benefits I have seen. The Surface is so much lighter and more portable than my 15-inch laptop, not to mention the battery actually seems to last.

The Pen. Okay, the first day the Pen was a novelty. For any of you who have ever endured the torture of reading my handwriting, you would understand. I really am one of those people who missed their calling and went into IT instead of the medical field, so writing in OneNote without my decoder ring is kind of an issue. It was fun, and a great idea for people with neat handwriting, but not for me. Day two is when I figured out there were other uses for the Pen. I was trying to explain something to someone, had no paper to draw a diagram on, and used the Surface to draw on instead. That is where I found my use for the Pen: when you are making notes for projects or network diagrams, you can just draw them into OneNote with the Pen and you will always have them. The Pen had moved from a novelty to a useful tool, and I was hooked at that point.

The new touchpad and keys on the Type Cover are nice. The touchpad is smooth to operate and the keys feel great. One downfall: if you have the Type Cover propped up, there is an odd thud sound when you type on it, and it's a little bouncy, but I can deal with that. I would say buying an external mouse is a good idea for long sessions. I decided to go with a Logitech MX Anywhere 2; I had such a great life with the original that I thought, why not upgrade to the newest version and give it a whirl.

As for the overall Surface itself: so far the design is great and the feel is great. It's been a great tool to add, and I am looking forward to the next version and to seeing how this market grows.


Building a new Media Server with Plex

After moving into a new house this last May, I have been on a major upgrade path, mainly because I went from dual satellite links at 10Mb down and 1Mb up, with a 25GB cap on each link, to 45Mb with no data cap. That allows a lot more internet activity at my house, so I made the decision to upgrade my old media server, which lets us stream our movies to any device anywhere. When the kids go somewhere, they can watch our videos on their tablets or phones. The old server was a custom-made rig, to say the least: two identical boxes, each with eleven 1TB drives, one box the master and the other the slave, set to mirror over an eSATA cable between them. It was originally loaded with Windows Vista Media Center and later upgraded to Windows 7 Media Center. I had spent days converting DVDs to WMV files and did not want to lose that work, so redundancy was a must on the original build. Over the years it worked amazingly well, with minor hard drive failures here and there, but nothing major. Being as it's been in production for 7+ years, I figured it was time for an upgrade.

A few months ago I started the project of ripping DVDs and Blu-rays again, this time to MKV files. I did this because most media devices today can play MKV in its raw format without any conversion or transcoding. I started loading these onto portable drives and set up a temporary Plex server on an old PC. I loaded Ubuntu 14 because the PC was only a Core Duo, and I knew Windows 7 would chew up most of the CPU on that machine; Ubuntu is a much less CPU-intensive OS, and the box only has 3GB of RAM. I wanted to give Plex a fair shot as a usable replacement for Media Center. I mapped the portable drives and started streaming media (how to map external drives to Plex), then let it run for a couple of weeks, checking in on it from time to time. It seemed like an awesome replacement. At one point I had four live streams running at the same time on a Core Duo machine; that is pretty good, I have to say, especially since one of the streams was 1080p and the other three were 720p.

Now it's the build phase. First step: find or build a storage device. I had been looking at different options for some time. I came across a friend who was running an Intel NUC as a Plex server with a Synology as his storage, but to me that seemed a little too costly; after all, I was looking at building a 20TB+ media library. I ran across an ASRock Fatal1ty X99X Killer motherboard that could support 12 hard drives, and I thought about creating a 12-drive hot-swap super Plex server. One issue was a case, and I found one from Rosewill: they make a 12-drive 4U hot-swap rack-mount chassis. But after doing the math on the cost of building my own versus just buying a Synology, I decided to go the Synology route; the cost of both builds, including a media server, was about the same, so why not go with the proven method?

I purchased a Synology DS1815+ and six Western Digital 4TB Red drives as my start. After about two weeks I ordered the remaining two drives plus one cold spare. This gives me just under 24TB of media storage in SHR-2 (Synology's RAID 6 equivalent) with one drive as a cold spare. The ability to survive two drive failures is nice, but since rebuild times are so long, I wanted to keep an extra drive close by as my security blanket.
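
For what it's worth, the capacity math behind that "just under 24TB" looks like this (quick napkin math; actual usable space will be a bit lower after filesystem overhead):

```python
# Back-of-the-napkin capacity math for 8 x 4TB drives in SHR-2 / RAID 6.
drives_installed = 8          # the cold spare stays on the shelf
drive_tb = 4                  # marketed terabytes (10^12 bytes)
parity_drives = 2             # SHR-2 / RAID 6 tolerates two failures

raw_tb = drives_installed * drive_tb                       # 32 TB raw
usable_tb = (drives_installed - parity_drives) * drive_tb  # 24 TB

# Drives are sold in decimal TB but reported in binary TiB, so the
# NAS shows noticeably less than the sticker number.
usable_tib = usable_tb * 1e12 / 2**40                      # ~21.8 TiB
print(f"raw {raw_tb} TB, usable {usable_tb} TB (~{usable_tib:.1f} TiB)")
```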

Building the server itself is still a work in progress; I am still working out the best solution. Since transcoding on my end should be minimal, and should put minimal tax on the CPU, there is not much need for a high-end processor, but I want to future-proof as much as possible. I know an i7 is a must, but the big question is whether to go with an Intel NUC, which gives you a dual-core i7 and a single NIC. Shuttle has a dual-NIC alternative; it only supports a Gen4 i7, but that can be a quad-core part. Right now I am leaning that way, mainly to get the quad core so I can support more HD video streams. There are always many different ways to solve this; it really depends on your needs. Right now I would like a quad-core i7, 8GB of RAM, and a 256GB SSD running Windows 10, with the database backed up to the Synology. I just have to make up my mind on the platform.

One big thing to consider is your Plex server's needs: how big your repository will be and the maximum number of simultaneous 1080p streams you will serve. For CPU recommendations, look here.
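
The commonly cited rule of thumb from Plex's guidance is roughly 2,000 PassMark points per simultaneous 1080p transcode, with direct-play streams barely touching the CPU. Here is a rough sizing helper based on that rule (the rule of thumb is Plex's; the helper and numbers below are my own):

```python
# Rough Plex CPU sizing using the commonly cited ~2000 PassMark points
# per simultaneous 1080p transcode. Direct-play streams barely use CPU.
PASSMARK_PER_1080P_TRANSCODE = 2000

def required_passmark(transcodes_1080p, headroom=1.2):
    """Estimate the CPU PassMark score needed, with ~20% headroom."""
    return int(transcodes_1080p * PASSMARK_PER_1080P_TRANSCODE * headroom)

# If even 3 of my 12 planned streams end up transcoding:
print(required_passmark(3))   # ~7200, comfortably within a desktop quad-core i7
```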

Finally, I got comfortable with a platform for the Plex server: the Shuttle DS81. From what I could tell, it was the best option for me. I went with the Intel i7-4790 for the quad-core 3.6GHz power behind the beast; I chose it based on benchmark tests, trying to pick the best processor for the best price, and this one fit my needs best. I also decided on a Samsung 850 EVO 250GB mSATA SSD. I had never seen an mSATA drive in person until this one, and I was quite impressed by how small they are. I topped it all off with 16GB of Crucial DDR3 PC3-12800 unbuffered non-ECC RAM. After a bit of assembly, we were ready to load Windows 10, which I chose for now for ease of use. When you connect your external drive to load the OS, you need to change one setting for it to recognize the mSATA drive: open the BIOS, go to Advanced, and you will see a setting called "Mini-PCIE/mSATA Select"; change it to mSATA and it will see your drive. Once I had Windows 10 loaded, I ran the driver disk included with the Shuttle to load the NIC drivers, and then we were off and updating. Eventually I loaded the Plex server, and now the streaming can begin. I did make a few changes to the Plex server now that it has a ton of resources behind it: I changed the transcoder quality to "Make my CPU hurt." I figured, why not, we might as well go all out on testing this.

After running it for a few hours last night, I am quite pleased so far. I managed to run six 1080p streams off it at once with only 35% CPU utilization. Hopefully this weekend I can load it down with my maximum planned load of 12 streams and see how it goes. I will keep this updated as testing progresses.

Below are some of the assembly pictures of the Synology and the Shuttle PC.

Synology 1815+:

- Synology 1815+
- Synology 1815+ rear
- Synology 1815+ RAM slot
- Synology 1815+ RAM inserted

Shuttle PC build:

- Shuttle PC DS81, Intel i7-4790, Samsung 850 mSATA 250GB SSD, Crucial 16GB RAM
- Shuttle PC DS81
- Shuttle PC DS81 rear
- Shuttle PC DS81 inside
- Shuttle PC DS81 heatsink
- Shuttle PC DS81 heatsink side
- Shuttle PC DS81 system board
- Intel i7-4790 installed
- Samsung 850 mSATA 250GB SSD
- Shuttle PC DS81 assembled
- Shuttle PC DS81 assembled with additional HD cradle installed
