December 14, 2011 spout tools data

Moving My Data To The Cloud: Stormy Weather

For years, I’ve maintained a single “main” computer. It was the computer that was the central authority of all of the personal data I’d accumulated over the years and from which it made me uncomfortable to be separated. Because I needed a single computer for everything, it had to work on my couch, on a plane, on a desk and everywhere else I ever needed to go. Also, it couldn’t have a giant monitor or multiple monitors, because it had to go everywhere. All of this was because I needed all of my data with me all of the time.

My process for moving to a new computer used to include a lot of manual copying of files from the old D hard drive (D is for Data) to my new hard drive, which was also carefully partitioned into C for Windows, Office, Visual Studio, etc. and D for a lifetime of books and articles, coding projects and utilities I’ve collected over the years, e.g. LinqPad, Reflector, WinMerge, etc. This is 30GB of stuff I wanted access to at all times. I was also backing up via Windows Home Server, keeping photos and music on the WHS box (another 30GB), then backing that up to the cloud via KeepVault. And finally, as I upgraded HDs to go bigger or go to solid state, I kept each old HD around as another redundant backup.

All of that gave me some confidence that I was actually keeping my data safe right up until my Windows Home Server crashed the system HD and I found out that the redundancy of WHS doesn’t quite work the way you’d like (this was before I installed KeepVault). This was a first generation HP Home Server box and when it went down, I took it apart so I could attach a monitor, keyboard and mouse to diagnose it, pulled the HDs out so I could read what files I could and ultimately had to drop it off in Redmond with the WHS team so I could get it up and running again.

There are some files I never got back.

KeepVault gave me back some of the confidence I’d had before WHS crashed, but they didn’t provide me a way to see what files they were backing up, so I didn’t have the transparency I wanted to be confident. Further, they don’t have clients on every kind of platform like Dropbox does.

Of course, simply sync’ing files isn’t enough — sync’ing my 10GB Outlook PST file every time I got a new email was not a good way to share 20 years of contacts, email and calendar items.

The trick is to sync each kind of data in the right way, be confident that it’s safe and have access to it across the various platforms I use: Windows, Windows Phone 7, iOS and possibly Android (you know, if I feel like walking on the wild side!). And since I’m currently under-employed (my new gig doesn’t start till the new year), I figured I’d do it once and do it right. I almost got there.

Files

Let’s start easy: files. Dropbox has made this a no-brainer. You install the software on any platform you care to use, drop everything you want into the folder and it just works, keeping files in sync on the cloud and across platforms, giving you adequate (although not great) status as it does so. Most platforms are supported natively, but even on platforms that aren’t, there are often alternative clients, e.g. I’m using Boxfiles for Windows Phone 7. When I gave up my Microsoft laptop, instead of doing the dance of the copy fairy to my new MacBook Air, I installed Dropbox on both computers and dropped everything I want backed up and sync’d between computers into the Dropbox folder. 36 hours and 30GB later, all of it was copied into the cloud and onto my new laptop, at which point I reformatted my Microsoft laptop and handed it in to my boss.

Further, as a replacement for WHS and KeepVault, I now keep all of the files that I was keeping just on my WHS server — photos and music primarily — in Dropbox.


This gives me the confidence I need to know that my files are safe and backed up to the cloud, while making it very easy to keep them backed up locally by simply running Dropbox on more than one computer at my house. If at any time I don’t want those files on any one computer, I tell Dropbox to stop sync’ing those folders, delete the local cache and I’m all done.

There are two tricks that I used to really make Dropbox sing for me. The first is to change my life: I no longer partition my HDs into C and D. The reason I’d always done that was so that I could repave my C with a fresh Windows, Office and VS install every six months w/o having to recopy all my data. Windows 7 makes this largely unnecessary anyway (bit rot is way down on Win7), but now it doesn’t matter — I can blow any computer away at will now, knowing that Dropbox has my back. In fact, Dropbox is my new D drive, but it’s better than that because it’s dynamic. The C drive is my one pool of space instead of having to guess ahead of time how to split the space between C and D.

The other thing I did was embrace my previous life: I wanted to keep D:\ at my fingertips as my logical “Data” drive. Luckily, Windows provides the “subst” command to do just that. Further, ntwind software provides the fabulous VSubst utility to do the mapping and keep it between reboots:


Now, I’ve got all the convenience of a dedicated “data” drive backed up to the cloud and sync’d between computers. Because I needed 60GB to start, I’m paying $200/year to Dropbox for their 100GB plan. This is more expensive than I’d like, but worth it to me for the data I’m storing.

There is a hitch in this story, however. Right now on Dropbox, data and metadata are available to Dropbox employees and therefore to anyone that hacks Dropbox (like the government). I don’t like that, so I keep my very most sensitive data off of Dropbox. When Dropbox employees themselves aren’t able to read Dropbox data or metadata, then I’ll move the sensitive data there, too.

Music

I’m not actually very happy with how I’m storing music. I can play all my music on any PC, but I can only play it one song at a time on my WP7 because there’s no Dropbox music client. I could use the Amazon cloud drive that provides unlimited music storage for $20/year, but there’s no WP7 client for that, either. Or I could spend $100/year on Amazon and get my 100GB of storage, but their client isn’t as widely available as Dropbox. Ironically, Dropbox is using Amazon as their backend, so hopefully increased pressure in this space will drop Dropbox’s prices over time.

Photos

I’m not using Facebook or Flickr for my photos simply because I’m lazy. It’s very easy to copy a bunch of files into Dropbox and have the sync’ing just happen. I don’t want to futz with the Facebook and Flickr web interfaces for 15GB worth of photos. Right now, this is the digital equivalent of a shoebox full of 8x10s, but at least I’ve got it all if the house burns down.

Notes and Tasklist

For general, freeform notes, I moved away from Evernote when they took the search hotkey away on the Windows client (no Ctrl+F? really?) and went to OneNote. The web client sucks, but it’s better than nothing and the Windows and WP7 clients rock. I have a few notes pinned to my WP7 home screen that I use for groceries, tasks, etc., and I have all of my favorite recipes in there, too, along with my relatives’ wi-fi passwords that they don’t remember themselves, a recording of my son snoring, etc. It’s a fabulous way to keep track of random data across platforms.

On the task list side, I only sorta use OneNote for that. I also send myself emails and write little TODO.txt files every time I get a little bee in my bonnet. I’ve never found that the Exchange tasks sync well enough between platforms to invest in them. Maybe someday.

Mail, Contacts and Calendar

And speaking of Exchange, that’s a piece of software that Microsoft spoiled me on thoroughly. This is sync that works very well for contacts, emails and calendar items. IMAP does email folders, but server implementations are spotty. For years, I used Exchange for my personal contacts and calendar, only keeping my personal email separate in a giant PST file, pulling it down via POP3. This can sorta be made to work, but what I really wanted was hosted Exchange.

However, what I found cost between $5 and $11 a month per user. I’d probably have gone with Office 365 for sellsbrothers.com mail, even at $5/month except for two reasons. The first is that Microsoft requires you to move your entire DNS record to them, not just the MX record, which means there is all kinds of hassle getting sellsbrothers.com working again. They do this so that they can get all of the DNS records working easily for Lync, Sharepoint, etc., but I don’t want those things, so it’s just a PITA for me. If they change this, I’d probably move except for the other problem: I’m not the only user on sellsbrothers.com.

For years, to be the big shot at family gatherings, I’ve been offering up permanent, free email addresses on my domain. That’s all well and good, but now, to maintain my geek cred, I need to keep my mom, my step-mom, my brother, my sons, etc., on an email server that works and one that they don’t have to pay for. So, while I was willing to pay $5/month for hosted Exchange for me, I wasn’t willing to pay it for my relatives, too!

One option I tried was asking securewebs.com (my rocking ISP!) to upgrade to SmarterMail 8.x, but that didn’t work. I even footed the one-time fee of $200 for the ActiveSync support for SmarterMail, but I couldn’t make that sync from Outlook on the desktop or the phone either.

Eventually I made an imperfect solution work: Hotmail. The nice thing about Hotmail is that it’s free for 25GB (yay webmail storage wars!) and it syncs contacts, mail and calendar items just like I want. Further, with some effort (vague error messages are not useful!), I was able to get Hotmail to pull in my personal email. And, after installing the Outlook Hotmail Connector (explicitly necessary because my Windows Live ID is not a @live.com or an @hotmail.com email address), I was able to sync almost everything, including the folders I copied from my giant PST file, via hotmail to both my desktop and phone Outlook. However, there are a few downsides:

  • There is an intrinsic delay between when someone sends me an email and when it syncs to any device because Hotmail is polling via POP3. This delay is annoying and sometimes sends me straight to the web mail frontend, where I can interact with my personal email directly.
  • The Outlook Hotmail Connector sync’ing progress indication is terrible in that it seems to stack every time I press F9 (a bad habit from years of POP3 usage) and I can’t tell what it’s working on or when it will finish. Because of this, I’ve trimmed the set of email folders I sync to the ones I really use, using the PST file as an archive for days gone by.
  • Hotmail does the right thing with the “Reply To” address, but sometimes weird @hotmail addresses with random characters show up in email threads, which breaks the fourth wall. That’s annoying.
  • My RSS Folders don’t sync to my phone, which is a shame because I really loved having my Hacker News folder pinned to my WP7 home page letting me know where there were new items. None of the RSS readers on WP7 seem to work as well as a simple pinned email folder.

The good news is that this all works for free and my relatives continue to have working email. The bad news is that it doesn’t work nearly as well as the Exchange server I’m used to. Hopefully I will be able to revisit this in the future and get it working correctly.

PC Games

I purchase all of my games via Steam now and install them as the mood strikes me. I love being able to reinstall Half-Life 2 or Portal on demand, then blow it away again when I need the hard drive space. Steam is the only viable app store for Windows right now, although I am looking forward to having the Microsoft app store in Windows 8.

Backups

I no longer maintain “backups” in the sense that I can slap in a new HD, boot from a USB stick and have my computer restored in 30 minutes or less (that never worked between WHS and Dell laptops anyway). I’ve had HD problems, of course, but they’re so rare that I no longer care about that scenario. Instead, what I do is keep all of the software that I normally install on a file server (the new job of my WHS box). If the file server goes down, then most of the software I install, i.e. Windows 7, Office and Visual Studio, is available for download via an MSDN Subscription. The rest is easily available from the internet (including Telerik tools and controls!) and I just install it as I need it.

Where Are We?

In order to free myself from any specific PC, I needed to pick a new centralized authority for my data: the cloud. The experience I was after for my PCs was the same one I already have on my phone — if I lose it, I can easily buy a new one, install the apps on demand and connect to the data I already had in Exchange, Hotmail, Skydrive, etc. Now that I’ve moved the rest of my world to Dropbox, I can treat my PCs and tablets like phones, i.e. easily replaceable. It’s not a perfect experience yet, but it’s leaps and bounds ahead of where it was even a few years ago.

Hardware and software comes and goes; data is forever.

December 13, 2011 spout telerik

Goodbye Microsoft, Hello Telerik!

I have gotten to do a ton of really great things at Microsoft:

  • I got to write a column on WPF and turn that column into not one, but two books.
  • I got the excitement, for every blog post in the first two years, of wondering if this was the one that was going to get me fired. (It was close a few times.)
  • I got to throw several Developer Conferences (DevCons).
  • I got to spin up a completely new community from scratch (“Oslo”).
  • I got to stay up all night erasing the word “WinFS” from all of microsoft.com.
  • I got to be part of a Microsoft product team from incubation through startup to product and then to kaput.
  • I got to get ordained as a minister so that I could marry a PM from the WPF team to a PM on the WCF team as part of the talk I gave with Doug Purdy at the 2008 PDC.
  • I got to prepare for that talk with Doug until 4am, then walk back to the hotel, causing people to cross the street to stay away from us. And then I got to give that talk with Doug the next morning right after restoring my copy of Windows that had crashed 30 minutes before.
  • I got to drag Lars Wilhelmsen up on stage to read Norwegian from the Oslo Tour Guide book, only to find I was pointing him at German.
  • I got to throw an SDR.
  • I got to play poker with Microsoft power brokers far above my level (and take their money : ).
  • I got to sleep at Don Box’s house and become an adjunct part of his family.
  • I got to have two design reviews with Bill Gates (as hard as I tried, I could never see him actually enter the room).
  • I got to turn developer feedback into hundreds of bugs across dozens of products.
  • I got code into Vista (and I assume into Windows 7 and Windows 8 as well).
  • I got to work on the team that built the most ambitious set of templates ever shipped with Visual Studio.
  • I got a very quick, very deep education on JavaScript and CSS.
  • I got to help drive the developer story for an entirely new platform: WinRT, WinJS and Win8.
  • I got to lead two product teams through two PDCs (OK, one PDC and one //build/).
  • I got to give the //build/ keynote launching the Visual Studio 11 tools for Windows 8 with Kieran Mockford, who will forever be my //build/ buddy.
  • I got to see how the sausage is made for SQL Server, WCF, WPF, Silverlight, Windows Phone 7, Windows 8 and a host of others. I am forever changed.

Those and dozens more have all been extraordinary experiences that have made my time at Microsoft extremely valuable. But, like all good things, that time has come to an end.

And now I’m very much looking forward to my new job at Telerik!

Telerik is an award-winning developer tools, UI controls and content management tools company. They’re well-known in the community not only for their top-notch tools and controls, but also for their sponsorship of community events and their free and open source projects. Telerik is a company that cares about making developers’ lives better and I’m honored that they chose me as part of their management overhead. : )

My division will be responsible for a number of UI control sets — including WinForms, WPF, Silverlight and ASP.NET — as well as a number of tools — including the Just line, OpenAccess ORM and Telerik Reporting. I’m already familiar with Telerik’s famous controls and am now ramping up on the tools (I have been coding with JustCode recently and I like it). My team is responsible for making sure that developers can make the most of existing platforms, knowing that when you’re ready for the next platform, we’ll be there ready for you.

These controls are already great (as is the customer support — holy cow!), so it’ll be my job to help figure out how we should think about new platforms (like Windows 8) and about new directions.

And if you’ve read this far, I’m going to ask for your help.

I’m going to be speaking at user groups and conferences and blogging and in general interacting with the community a lot more than I’ve gotten to do over the last 12 months. As I do that, please let me know what you like about Telerik’s products and what you don’t like, what we should do more of and what new things we should be doing. Telerik already has forums, online customer support, blog posts and voting — you should keep using those. In addition:

Feel free to reach out to me directly about Telerik products.

Of course, I can’t guarantee that I’ll take every idea, but I can guarantee that I’ll consider every one of them that I think will improve the developer experience. I got some really good advice when I first arrived at Microsoft: “Make sure that you have an agenda.” The idea is that it’s very easy to get sucked into Microsoft and forget why you’re there or what you care about. My agenda then and now is the same:

Make developers’ lives better.

That’s what I tried to do at Intel, DevelopMentor and Microsoft and that’s what I’m going to try to do at Telerik. Thanks, Telerik, for giving me a new home; I can’t wait to be there.

December 4, 2011

Roslyn Syntax Visualizer Tools

As I do more with Roslyn, I find I want more information about what I’m parsing and how it’s represented in the Roslyn object model. I could, of course, have built myself a little OM dumper for Roslyn, but instead I dug through the samples and found two cool ones built right in, both provided in the Documents\Microsoft Codename Roslyn CTP - October 2011\Shared folder.

Both Roslyn visualizer samples show a set of objects from the syntax part of the Roslyn API and the associated properties for the currently selected node as well as the associated text. The difference is only where the text comes from, a syntax tree or a text file.
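
For what it’s worth, here’s roughly what I mean by a little OM dumper. This is just a sketch against the October 2011 CTP; the member names I’m using here (Root, Kind, ChildNodes) are from memory and may differ slightly in your drop:

using System;
using Roslyn.Compilers.CSharp;

class SyntaxDumper {
  static void Main() {
    // Parse a bit of code and dump the resulting syntax tree, one node per line
    var tree = SyntaxTree.ParseCompilationUnit("class C { void M() { int x = 1 + 1; } }");
    Dump(tree.Root, 0);
  }

  // Recursively print each node's kind, indented by depth in the tree
  static void Dump(SyntaxNode node, int indent) {
    Console.WriteLine(new string(' ', indent * 2) + node.Kind);
    foreach (var child in node.ChildNodes()) {
      Dump(child, indent + 1);
    }
  }
}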

Syntax Debugger Visualizer

The SyntaxDebuggerVisualizer sample (fully described in the associated Readme.html) allows you to create a new Visual Studio visualizer such that you can hover over a node from the syntax tree, click on the magnifying glass in the data tip and get this:

[screenshot: SyntaxDebuggerVisualizer]

The text comes from the syntax tree parsed in the program. As each node in the syntax tree in the visualizer is selected, the associated properties are shown below and the range of text is shown on the right.

Syntax Visualizer Extension

The SyntaxVisualizerExtension sample (also described in its own associated Readme.html) shows the syntax tree from the current C# or VB file that’s open in Visual Studio. You get to the visualizer by loading the SyntaxVisualizerExtension sample project and starting the app (under the debugger [F5] or not [Ctrl+F5] as you choose), which starts another copy of Visual Studio. In this instance of VS, open a C# or VB source file and choose View | Other Windows | Roslyn Syntax Visualizer, which shows the following:

[screenshot: SyntaxVisualizerExtension]

This version of the visualizer works just like the other one except that it gets the text from the current source code file. As you open different files, the visualizer window updates itself. As you change the selected node in the visualizer’s syntax tree, the associated code in the file is selected.

Both of these tools are very helpful for understanding what’s been parsed by Roslyn. I personally like the debugger visualizer, as it’s always available without starting up a new instance of VS, but honestly I’m happy to have either, let alone both!

Update: Plus, there’s a cool tree view, too. Check it out!

November 26, 2011 tools .net

REPL for the Roslyn CTP 10/2011

I don’t know what it is, but I’ve long been fascinated with using the C# syntax as a command line execution environment. It could be that PowerShell doesn’t do it for me (I’ve seriously tried half a dozen times or more). It could be that while LINQPad comes really close, I still don’t have enough control over the parsing to really make it work for my day-to-day command line activities. Or it may be that my friend Tim Ewald has always challenged csells to sell C shells by the sea shore.

Roslyn REPL

Whatever it is, I decided to spend my holiday time futzing with the Roslyn 2011 CTP, which is a set of technologies from Microsoft that gives you an API over your C# and VB.NET code.

Why do I care? Well, there are all kinds of cool code analysis and refactoring tools I could build with it and I know some folks are doing just that. In fact, at the BUILD conference, Anders showed off a Paste as VB command built with Roslyn that would translate C# to VB slick as you please.

For me, however, the first thing I wanted was a C# REPL environment (Read-Evaluate-Print-Loop). Of course, Roslyn ships out of the box with a REPL tool that you can get to with the View | Other Windows | C# Interactive Window inside Visual Studio 2010. In that window, you can evaluate code like the following:

> 1+1
2
> void SayHi() { Console.WriteLine("hi"); }
> SayHi();
hi

Just like modern dynamic languages, as you type your C# and press Enter, it’s executed immediately, even allowing you to drop things like semi-colons or even calls to WriteLine to get output (notice the first “1+1” expression). This is a wonderful environment in which to experiment with C# interactively, but just like LINQPad, it was a closed environment; the source was not provided!

The Roslyn team does provide a great number of wonderful samples (check the “Microsoft Codename Roslyn CTP - October 2011” folder in your Documents folder after installation). One in particular, called BadPainting, provides a text box for inputting C# that’s executed to add elements to a painting.

But that wasn’t enough for me; I wanted at least a Console-based command line REPL like the cool Python, JavaScript and Ruby kids have. And so, with the help of the Roslyn team (it pays to have friends in low places), I built one:

RoslynRepl Sample Download

Building it (after installing Visual Studio 2010, Visual Studio 2010 SP1, the Visual Studio 2010 SDK and the Roslyn CTP) and running it lets you do the same things that the VS REPL gives you:

[screenshot: RoslynRepl]

In implementing my little RoslynRepl tool, I tried to stay as faithful to the VS REPL as possible, including the help implementation:

[screenshot: replhelp]

If you’re familiar with the VS REPL commands, you’ll notice that I’ve trimmed the Console version a little as appropriate, most notably the #prompt command, which only has “inline” mode (there is no “margin” in a Console window). Other than that, I’ve built the Console version of REPL for Roslyn such that it works just exactly like the one documented in the Roslyn Walkthrough: Executing Code in the Interactive Window.

Building a REPL for any language is, as you might imagine, a 4-step process:

  1. Read input from the user
  2. Evaluate the input
  3. Print the results
  4. Loop around to do it again until told otherwise

Read

Step 1 is a simple Console.ReadLine. Further, the wonder and beauty of a Windows Console application is that you get complete Up/Down Arrow history, line editing and even obscure commands like F7, which brings up a list of commands in the history:

[screenshot: replbeauty]

The reading part of our REPL is easy and has nothing to do with Roslyn. It’s evaluation where things get interesting.

Eval

Before we can start evaluating commands, we have to initialize the scripting engine and set up a session so that as we build up context over time, e.g. defining variables and functions, that context is available to future lines of script:

using Roslyn.Compilers;
using Roslyn.Compilers.CSharp;
using Roslyn.Compilers.Common;
using Roslyn.Scripting;
using Roslyn.Scripting.CSharp;
...
// Initialize the engine
string[] defaultReferences = new string[] { "System", ... };
string[] defaultNamespaces = new string[] { "System", ... };
CommonScriptEngine engine = new ScriptEngine(defaultReferences, defaultNamespaces);
Session session = Session.Create();

// HACK: work around a known issue where namespaces aren't visible inside functions
foreach (string nm in defaultNamespaces) {
  engine.Execute("using " + nm + ";", session);
}

Here we’re creating a ScriptEngine object from the Roslyn.Scripting.CSharp namespace, although I’m assigning it to the base CommonScriptEngine class which can hold a script engine of any language. As part of construction, I pass in the same set of assembly references and namespaces that a default Console application has out of the box and that the VS REPL uses as well. There’s also a small hack to fix a known issue where namespaces aren’t visible during function definitions, but I expect that will be unnecessary in future drops of Roslyn.

Once I’ve got the engine to do the parsing and executing, I create a Session object to keep context. Now we’re all set to read a line of input and evaluate it:

ParseOptions interactiveOptions =
  new ParseOptions(kind: SourceCodeKind.Interactive,
                   languageVersion: LanguageVersion.CSharp6);
...
while (true) {
  Console.Write("> ");
  var input = new StringBuilder();

  while (true) {
    string line = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(line)) { continue; }

    // Handle #commands
    ...

    // Handle C# (include #define and other directives)
    input.AppendLine(line);

    // Check for complete submission
    if (Syntax.IsCompleteSubmission(
          SyntaxTree.ParseCompilationUnit(
            input.ToString(), options: interactiveOptions))) {
      break;
    }

    Console.Write(". ");
  }

  Execute(input.ToString());
}

The only thing we’re doing that’s at all fancy here is collecting input over multiple lines. This allows you to enter commands over multiple lines:

[screenshot: replmultiline]

The IsCompleteSubmission function is the thing that checks whether the script engine will have enough to figure out what the user meant or whether you need to collect more. We do this with a ParseOptions object optimized for “interactive” mode, as opposed to “script” mode (reading scripts from files) or “regular” mode (reading fully formed source code from files). The “interactive” mode lets us do things like “1+1” or “x” where “x” is some known identifier without requiring a call to Console.WriteLine or even a trailing semi-colon, which seems like the right thing to do in a REPL program.
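
To make that concrete, here’s a tiny sketch using the same CTP calls as the loop above: a finished expression counts as a complete submission, while a dangling method definition doesn’t.

// "1+1" parses as a complete interactive submission, so this is true
bool done = Syntax.IsCompleteSubmission(
  SyntaxTree.ParseCompilationUnit("1+1", options: interactiveOptions));

// An unclosed method body isn't complete yet, so this is false (keep reading lines)
bool notYet = Syntax.IsCompleteSubmission(
  SyntaxTree.ParseCompilationUnit("void SayHi() {", options: interactiveOptions));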

Once we have a complete command, single or multi-line, we can execute it:

public void Execute(string s) {
  try {
    Submission<object> submission = engine.CompileSubmission<object>(s, session);
    object result = submission.Execute();
    bool hasValue;
    ITypeSymbol resultType = submission.Compilation.GetSubmissionResultType(out hasValue);

    // Print the results
    ...
  }
  catch (CompilationErrorException e) {
    Error(e.Diagnostics.Select(d => d.ToString()).ToArray());
  }
  catch (Exception e) {
    Error(e.ToString());
  }
}

Execution is a matter of creating a “submission,” which is a unit of work done by the engine against the session. There are helper methods that make this easier, but we care about the output details so that we can implement our REPL session.
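
If you don’t need those details, the engine also has a one-shot helper (the same Execute call used in the namespace hack above) that compiles and runs against the session in a single step. A minimal sketch:

// Compile and run in one call; handy, but we lose the submission's result-type info
object result = engine.Execute(s, session);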

Print

Printing the output depends on the type of a result we get back:

ObjectFormatter formatter =
  new ObjectFormatter(maxLineLength: Console.BufferWidth, memberIndentation: " ");
...
Submission<object> submission = engine.CompileSubmission<object>(s, session);
object result = submission.Execute();
bool hasValue;
ITypeSymbol resultType =
  submission.Compilation.GetSubmissionResultType(out hasValue);

// Print the results
if (hasValue) {
  if (resultType != null && resultType.SpecialType == SpecialType.System_Void) {
    Console.WriteLine(formatter.VoidDisplayString);
  }
  else {
    Console.WriteLine(formatter.FormatObject(result));
  }
}

As part of the result output, we’re leaning on an instance of an “object formatter” which can trim things for us to the appropriate length and, if necessary, indent multi-line object output.

In the case that there’s an error, we grab the exception information and turn it red:

void Error(params string[] errors) {
  var oldColor = Console.ForegroundColor;
  Console.ForegroundColor = ConsoleColor.Red;
  WriteLine(errors);
  Console.ForegroundColor = oldColor;
}
public void Write(params object[] objects) {
  foreach (var o in objects) { Console.Write(o.ToString()); }
}

void WriteLine(params object[] objects) {
  Write(objects);
  Write("\r\n");
}

[screenshot: replerror]

Loop

And then we do it all over again until the program is stopped with the #exit command (Ctrl+Z, Enter works, too).

Where Are We?

Executing lines of C# code, the hardest part of building a C# REPL, has become incredibly easy with Roslyn. The engine does the parsing, the session keeps the context and the submission gives you extra information about the results. To learn more about scripting in Roslyn, I recommend the following resources:

Now I’m off to add Intellisense support. Wish me luck!

November 19, 2011

Telerik: Best tech support response ever

I was playing around with the Telerik WPF controls the other day and I ran into an “issue.” It wasn’t a bug, just a pet peeve of mine, so knowing that two friends of mine, Stephen Forte and Doug Seven, both work at Telerik, I thought I’d report it. I created an account on their support web site and dropped in the following message:

From: Chris
Date: 11/17/2011 10:59:38 AM

when I’m choosing setup options in the Telerik installer, I can select options by clicking on the little, tiny box but I cannot select options by clicking on the wide, giant checkbox label. I’d really love to be able to do the latter as well as the former. thanks!

Within 24 hours, I got the best developer tech support response I’ve gotten in 30 years in this industry (damn, I’m old):

From: Telerik Admin
Date: 11/18/2011 8:09:42 AM

Hi Chris,
A very valid point indeed, thanks for sharing your opinion!
The thing is a bit tricky in terms of UX actually and I’d love your input here. Let me add some details:
The idea is that the click on the label is used for highlighting the item (thus change the displayed images) and the click on the checkbox is used for checking it.
Now, there are three approaches (different than the current one) of which I like none (maybe prefer the third actually):

1. The label click both highlights and checks/unchecks the checkbox. I don’t like this one for two reasons: 1) if Telerik’ve set an item to be checked by default, we don’t want the customer to uncheck it mistakenly and 2) if the item is unchecked by default, the customer might just want the original stuff and he’d need a second click to uncheck it back.

2. Use double-click on the label to check/uncheck the checkbox. Don’t like it for it’s not intuitive and noone would use it. As an example, the Windows Platform Installer has such a feature and we discovered it a year after its initial release - when we started checking it deeper.

3. Only check/uncheck a checkbox on label click if the item has already been highlighted. The drawback of this approach is that you would need two clicks to have the item state changed the first time you’re on it. But still, this one seems kinda reasonable.

How do you find these?
Thanks,
Erjan Gavalji
the Telerik team

Explore the entire Telerik portfolio by downloading the Ultimate Collection trial package. Get it now >>

I have since learned that the Telerik developers support their own software and the benefits are obvious:

  • The reply was not wrapped in advertising chrome. There was a small Telerik plug at the bottom, after he’d addressed my issue.
  • The reply was not a canned response that “we’ve gotten your support request.”
  • The reply was not a canned response that didn’t address my issue.
  • The reply made it clear that I was understood and acknowledged my issue as valid.
  • The reply came from a real person — Erjan — and he was willing to put his name on the email and take responsibility.
  • Erjan laid out the possible options he thought of to fix the issue and asked me which I thought I would like best.

Erjan from Telerik — you’re my new hero. Thanks for answering my question so thoroughly. I have confidence that you’ll take my initial feedback and my reply to this email (show selection when you click on the checkbox label like the VS2010 installer does) and make the product even better.

It’s no wonder Telerik is an award-winning software vendor. You have to love a company that’s willing to be open with their developers.

Update: as of a few days later, a new installer was posted that included my fix suggestion. Wow!

March 1, 2011

Mary Sells, 1921-2011, Rest In Peace

Mary Hohnke Sells died on a Monday afternoon on the last day of February, 2011 in her home in Fargo, ND. She had just turned 89 years old on the 18th of February. She passed away peacefully in her sleep during an afternoon nap, having been tucked in by her daughter-in-law earlier that day. She’s survived by her son J. Michael Sells, her daughter-in-law Charlene Schreiber, her grandson Chris Sells and granddaughter-in-law Melissa Plummer and her two teenaged grandsons, John Michael Sells and Thomsen Frederick Sells. She was the last of four siblings; John, Shirley and Jim have all gone ahead of her to prepare the way.

Mary died a much beloved mother, grandmother and great grandmother as well as a dear friend to most everyone she met. She was generous of spirit, baking and cooking for her friends and family almost right up until the day she died, making sure her loved ones stayed plump in her love. She was talented in the kitchen, keeping her family recipes close to her heart for only those most special in her life. She was also a mischievous soul, taking advantage of her quick mind and her family’s sympathy for her ailments in later years to cheat outrageously at games of all kinds.

Mary was born in 1921, making her a child of the Great Depression. She graduated from Fargo’s Central High School in 1940, after which she pursued a course of study in Radiology Technology. She attained her national certification in 1946 and held it for 60 years. She married her late husband John Dickenson Sells in 1946, being secretly thrilled but outwardly scandalized when he insisted on public displays of dancing and other such tom-foolery. John worked for the Northern Pacific Railway and Mary worked in multiple clinical locations as she moved with her husband to sites ranging from North Dakota to Washington state. Her first child, Mike, is 61 and a successful draftsman at a local civil engineering firm. Her second child, Gretchen, died when she was only 15 in 1969, taking some of the light from Mary’s eyes. Her husband John was soon to follow, dying in 1971 of complications following gall bladder surgery.

None of this stopped Mary from living her life, however, having gone to Seattle in 1978 to be closer to her sister Shirley and to follow her career, then moving back to Fargo in 1983 to be with her son and grandson. Before her move to Seattle, Mary helped take care of her grandson during the summers when he would visit. She was a second mother to him, doting on him and spoiling him thoroughly his whole life with food and attention. Till the day she died, Chris was her “baby boy,” in spite of his age of 41 and his height of 6’5”.

Mary was a member of the Mecca Chapter of the Order of the Eastern Star and received a 50-year membership acknowledgement for her many years of service. She was a life-long member of St. Mark’s Lutheran Church, an active member in the Women of the Evangelical Lutheran Church in America and a church quilter for most of those years. Mary was also an active member of PLS and enjoyed those friendships immensely.

Later in life, when her sister Shirley was diagnosed with cancer, Mary and she, both in their 70s, made sure that Shirley’s “bucket list” was fulfilled, which included wine tours, roller coasters in Las Vegas and even, for Mary, an incident on a tipped raft in the Rogue River in Oregon. She lived a full, rich life on her own terms, never shy about what she wanted for herself and others, and always ready with advice, wanted or not.

Mary died in her own home while she still had the faculties to interact with the ones she loved, as she wanted. She will be deeply missed and felt daily in the hearts and minds of those she left behind.

January 15, 2011

The Basics of EF Validation: IDataErrorInfo

When you’re adding or updating data in your database, you really want to make sure that the data being sent to the database is good and true. Often, that’s something that can be checked in the database itself. The first thing you’ll want to do is make sure that the database has validation constraints set on the columns, like nullability or max data sizes. If you’re going EF model-first, you can set these properties on the properties of your entities. If you’re not, you can set these properties in the database or get even fancier and write triggers that check the validity of the data. Finally, you can disable insert, update and delete altogether in favor of stored procedures, changing your EF mapping to generate calls to those instead. The nice thing about checks in the database is that no matter how the data gets there, whether it’s via your EF-based app or not, the checks happen.

However, if you’d like to also put checks into your EF code, perhaps because you’d like to avoid a round-trip to the database for bad data, you can do so in your EF-enabled language of choice, e.g. C#.

Imagine a very simple EDM that describes web advertisements:


All properties on our entity type get generated On<PropertyName>Changing and On<PropertyName>Changed partial methods. If you want to check one property in isolation, it’s easy to provide an implementation of your partial method of choice, e.g.

namespace EdmTest {
  partial class Ad {
    partial void OnLinkChanging(string value) {
      if (!value.StartsWith("http://",
            StringComparison.InvariantCultureIgnoreCase)) {
        throw new ArgumentOutOfRangeException("Link must start with 'http://'");
      }
    }
  }
}

Here we’ve made sure that whenever we set the Link property, it must be of a certain format. If we were to violate that restriction, things go boom:


As MVC translates the form fields into values on the Ad object that is passed to the controller’s Create method, setting the Link property with a bad value triggers the exception:

// POST: /Ad/Create
[HttpPost]
public ActionResult Create(Ad ad) {...}

MVC catches the exception before the Create method is even called and the view shows the error message. Notice that the error message is not what we provided, however.

Further, sometimes there are problems with an object’s state that span more than one property. Unfortunately, because such a constraint can’t be checked on any single property change, we need to check it at the object level, not just at the property level.

For both of these issues, we have IDataErrorInfo.

IDataErrorInfo

The IDataErrorInfo interface was introduced back in the mists of time with .NET 1.x for use specifically with data binding in Windows Forms. I wouldn’t recommend that anyone invest in anything but maintenance on their WinForms apps, but ASP.NET, the Windows Presentation Foundation (WPF) and Silverlight all support IDataErrorInfo. Data binding is involved enough and GUI-framework-specific enough that you’ll need to read up on it in your favorite GUI-framework-specific book, but the IDataErrorInfo interface is exactly what we need even without data binding:

namespace System.ComponentModel {
  public interface IDataErrorInfo {
    string Error { get; }
    string this[string columnName] { get; }
  }
}

Notice that IDataErrorInfo exposes error descriptions at both the object and the property/column level. It’s easy to implement this standard interface on our example Ad class:

partial class Ad : IDataErrorInfo {
  public string Error {
    get {
      // Check the ad for errors
      if (string.IsNullOrEmpty(Title) && string.IsNullOrEmpty(ImagePath)) {
        return "Must set Title or ImagePath";
      }
      return null;
    }
  }




  public string this[string columnName] {
    get {
      // Check any specific property for errors
      switch (columnName) {
        case "Link":
          if (Link != null && !Link.StartsWith("http://",
                StringComparison.InvariantCultureIgnoreCase)) {
            return "Link must start with 'http://'";
          }
          break;
      }
      return null;
    }
  }
}

Because each of the generated entity classes is partial, you can provide your own implementation to be merged with the generated implementation, in our case the IDataErrorInfo interface implementation. Now, when MVC hydrates an object that implements IDataErrorInfo, it’ll check to see if there are problems. To check, our controller provides the ModelState property, which itself provides the IsValid flag:

// POST: /Ad/Create
[HttpPost]
public ActionResult Create(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }
    ...
  }
  catch {
    return View();
  }
}

// POST: /Ad/Edit/
[HttpPost]
public ActionResult Edit(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }
    ...
  }
  catch {
    return View();
  }
}

In addition to the IsValid flag, the ModelState provides a list of property name/error message pairs that show up in the code that the view generator spits out when it’s creating forms, e.g.

...
<% using (Html.BeginForm()) {%>
<%: Html.ValidationSummary(true) %>
...
<%: Html.TextBoxFor(model => model.Link) %>
<%: Html.ValidationMessageFor(model => model.Link) %>
...

It’s the ValidationSummary helper that shows object-level errors and the ValidationMessageFor helper that shows property-level errors:



There are other means of validation that you will want to investigate, like the validation attributes supported by MVC and Silverlight, but IDataErrorInfo is the one with the broadest reach. It’s also the one that’s simplest for you to check yourself if you’re not getting the automatic support you want from your GUI framework. For example, because the SaveChanges method on the context base class is virtual and because we’ve got the Object State Manager, we can check ourselves for object errors using IDataErrorInfo:

using System.Data.Objects;

namespace AdMan.Models {
  partial class sbdbEntities {
    public override int SaveChanges(SaveOptions options) {
      // Make sure we're detecting all changes. base.SaveChanges
      // does this, but that may be too late.
      DetectChanges();
      // Get all the new and updated objects
      var objectsToValidate = ObjectStateManager.
        GetObjectStateEntries(EntityState.Added | EntityState.Modified).
        Select(e => e.Entity).OfType<IDataErrorInfo>();

      // Check each object for errors
      foreach (var obj in objectsToValidate) {
        // Check each property
        foreach (var property in obj.GetType().GetProperties()) {
          var columnError = obj[property.Name];
          if (columnError != null) { throw new Exception(columnError); }
        }

        // Check each object
        var objectError = obj.Error;
        if (objectError != null) { throw new Exception(objectError); }
      }

      // All clear
      return base.SaveChanges(options);
    }
  }
}

Here we’re overriding the SaveChanges method we’ve been calling all this time to take advantage of the IDataErrorInfo interface. The first call to DetectChanges is to make sure we’ve gotten all the changes into the Object State Manager (only necessary if you’re using EF POCO classes). The call to the GetObjectStateEntries method on the ObjectStateManager class produces all of the added and modified objects, their state, what the old and new values are, etc. We pull off each one of the entities that implement IDataErrorInfo and call the methods to check for property and object-level errors. If we find one, we throw an exception, otherwise we let the call to SaveChanges through.

This code isn’t needed if you’re already using a GUI framework that supports IDataErrorInfo, but it’s still handy to know you can roll your own code into SaveChanges if you need to.
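
For example, here’s a hypothetical bit of calling code (using the Ad entity and sbdbEntities context from this post) showing the override kicking in before any database round-trip happens:

using (var db = new sbdbEntities()) {
  // No Title and no ImagePath, so the object-level check in IDataErrorInfo fails
  db.Ads.AddObject(new Ad());

  try {
    db.SaveChanges();
  }
  catch (Exception ex) {
    // Prints the message from our override: "Must set Title or ImagePath"
    Console.WriteLine(ex.Message);
  }
}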

Where Are We?

IDataErrorInfo has been the core of data validation support in GUI libraries since .NET 1.x and, while there are simpler ways to do it for individual libraries, IDataErrorInfo works just fine with MVC and EF, two of the most popular libraries we’ve got just now.

January 15, 2011

EF Concurrency Mode Fixed + MVC

Imagine a very simple EDM that describes web advertisements:


Now imagine that I’d like to build a web application to manage instances of the Ad type. If multiple people are editing ads at once, especially the same set of ads, I’m likely to run into concurrency errors. By default, EF lets the last change win.

For example, if Chris and Bill are both editing Ad.Id == 1 and Chris pushes his changes to the database first, EF will not notice that the ad has been updated underneath Bill when he saves his changes, and Chris’s changes will be lost. What we’d really like is that, when Bill attempts to save his changes, we check whether the data has changed since we cached it, so that Bill gets an error and is able to merge his changes in with Chris’s.

This style of multi-user concurrency management is called “optimistic concurrency” because it assumes few people will be changing the same data at the same time. It’s the most efficient means of concurrency management when that condition is true. Another type of concurrency management is named “pessimistic concurrency,” and is generally implemented using locks on the database, which tends to slow things down.

By default, EF provides no concurrency support; if two people push changes to the same row in the database, whoever’s change goes in last wins. This results in data loss, which in the world of data is a big, fat, no-no.

The way that EF lets you decide how a row is changed is via the Concurrency Mode property on every one of the entity’s properties in the designer. By default, the Concurrency Mode is set to “None”, which results in SQL like the following when an update is needed:

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where ([Id] = @4)

The Id column is used to select whether to perform an update, so any changes made to the underlying columns for that row are not detected and are therefore lost. The way to tell EF which columns to check is to change the Concurrency Mode property from None (the default) to Fixed on an entity’s property. For example, if you set Concurrency Mode to Fixed for each of the read-write properties for our sample Ad entity, the update would look like the following:

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where ((((([Id] = @4) and ([Title] = @5)) and [ImagePath] is null)
and ([Link] = @6)) and ([ExpirationDate] = @7))

This is handy, but it also requires that we keep around an entity in memory in both its original state and its updated state for the length that the user is editing it. For desktop applications, that’s not an issue, but for stateless web pages, like MVC-based web pages, it is.

It’s for this reason that the EF team itself recommends using a special read-only column just describing the “version” of the row. Ideally, whenever any of the data in a row changes, the version is updated so that when an update happens, we can check that special column, e.g.

update [dbo].[Ads]
set [Title] = @0, [ImagePath] = @1, [Link] = @2, [ExpirationDate] = @3
where (([Id] = @4) and ([TimeStamp] = @5))

Here, the TimeStamp column is our “version” column. We can add such a column in our SQL Server database using the “timestamp” type, as shown in SQL Server Management Studio here:


The semantics of the timestamp type are just what we want: every time a row is updated, the timestamp column is updated. To see this new column in the Entity Data Model, you’ll have to right-click on the designer surface and choose Update Model from Database, which results in the TimeStamp being added to our model:


The TimeStamp field will come through as type Binary, since EF4 doesn’t have direct support for it, and with a StoreGeneratedPattern of Computed (which is exactly right). To enable EF to use the new column to perform optimistic concurrency, we need only change the Concurrency Mode to Fixed.

Now, here’s a simple Edit method on our MVC controller:

// GET: /Ad/Edit/5
public ActionResult Edit(int id) {
  return View(db.Ads.Single(ad => ad.Id == id));
}

This kicks off the view, but with one key missing ingredient — the view doesn’t have the TimeStamp field in it; because it’s mapped in EF as binary data, the MVC form generator wouldn’t provide a field for it. To make sure we pass the version of the data along with the data itself, we have to add a field to our HTML form and, because we don’t want the user to see it, let alone edit it, we need to make it hidden:

<% using (Html.BeginForm()) {%>
...
<%: Html.HiddenFor(model => model.TimeStamp) %>
...
<% } %>

The Html.HiddenFor is an MVC helper that produces HTML that looks like so:

<input id="TimeStamp" name="TimeStamp" type="hidden" value="AAAAAAAAB9E=" />

Now, when we press the Save button, the SQL we saw earlier is invoked to use the ad’s unique ID as well as the version (our timestamp column). If there’s a concurrency problem, i.e. somebody else has updated the underlying row in the database since we cached our values on the HTML form, we get an exception:


The message is saying that no rows were updated, which happens when the timestamp of the underlying row no longer matches. To provide a more helpful message, you’ll want to catch the specific error yourself:

// POST: /Ad/Edit/
[HttpPost]
public ActionResult Edit(Ad ad) {
  try {
    if (!ModelState.IsValid) { return View(); }
    // Attach the ad to the context and let the context know it's updated
    db.Ads.Attach(ad);
    db.ObjectStateManager.ChangeObjectState(ad, EntityState.Modified);
    db.SaveChanges();
    return RedirectToAction("Index");
  }
  catch (OptimisticConcurrencyException ex) {
    ModelState.AddModelError("", "Oops! Looks like somebody beat you to it!");
    return View(ad);
  }
}

Here we’re catching the OptimisticConcurrencyException and setting our own message before sending the user back to their data for them to grab what they want and try again.

Where Are We?

EF works great with MVC, but in the case of optimistic concurrency, you’ve got to work around the stateless model of the web a little to get it working just the way you like.

