Office Fabric UI Snippets for VS Code

Update for Release 1.2: http://blog.mannsoftware.com/?p=2491

Update for Release 1.0: http://blog.mannsoftware.com/?p=2371

Release Date: Feb 26, 2016

Status: Beta

Provided By: Sector 43 Software (David Mann)

Details: s43.io/FabricSnippets

Repo: https://github.com/Sector43/FabricSnippets

Based on: Fabric release 2.01 (Feb 5, 2016)


These snippets are a first go at making the Office UI Fabric easier to use. In general, the HTML is taken directly from the Office UI Fabric GitHub repository, with some tweaking.

Snippets generally fall into one of two flavors:

  • Simple Components: these have no JavaScript elements, so the snippet just expands into the correct HTML to render the component
  • Complex Components: these aren’t really “complex”; they simply have some JavaScript associated with them. In these cases, there is a snippet for the component and one for an example of the JavaScript required to make the component work.

All of the snippets have a trigger that starts with uif-, so you can see what’s available by simply typing uif- and looking at the Intellisense popup shown by VS Code.

Installation Instructions

For the time being, installation is manual. Once I’ve ironed out any bugs, I’ll make a true VS Code extension and deploy it to the Extension Gallery. At that point, I’ll also create a Visual Studio Extension and deploy that to the Visual Studio Gallery as well. If there is interest, I’ll convert the snippets to other editors – Sublime is one I’m considering, but am open to other suggestions as well.

To Install:

  • Open the file VSCodeFabricSnippets.txt from the GitHub repo (direct link: https://raw.githubusercontent.com/Sector43/FabricSnippets/master/VSCodeFabricSnippets.txt)
  • Select all of the contents and copy it to the clipboard
  • In VS Code, click File | Preferences | User Snippets and select HTML from the Language dropdown
  • Paste the contents from the GitHub file into the html.json file that is now open in VS Code
  • Save the html.json file and close it

The snippets are now available when you are editing an HTML file in VS Code. (NOTE: Snippets in VS Code only seem to work if the HTML file is open as part of a folder, not if you just open a standalone file. I’m looking into whether this is really true, and if so whether it is by design or a bug)

Installation and usage is shown in the short video here: https://youtu.be/VsfUTwgNdgg

Known Issues

The following components are not currently supported by the snippets:

  • Facepile
  • People Picker
  • Persona
  • Persona Card

They’ll be coming in the next release.

Please report other issues here: https://github.com/Sector43/FabricSnippets/issues

Announcing Fabric Explorer

Update March 3, 2016: http://blog.mannsoftware.com/?p=2171


 

Microsoft released the Office UI Fabric in August of 2015.  It is essentially “Bootstrap for Office, Office 365 and SharePoint” plus more.  It allows you to quickly and easily build a user interface for your add-in or app that looks and feels like Office, SharePoint, or Office 365.

Fabric Explorer is a Chrome Extension I wrote that allows you to explore the UI elements of Fabric within a live web page, right inside Chrome.  The extension is featured heavily in my upcoming Pluralsight course Introducing Office UI Fabric (coming out in the next week or so) and will also be used in part two – Developing with the Office UI Fabric (due out in March).

Here is a screenshot of Fabric:

[Screenshot]

You can see a demo of it in action here: https://youtu.be/e8v-Zw1iRZs.  The extension itself is available in the Chrome Web Store: https://chrome.google.com/webstore/detail/fabric-explorer/iealmcjmkenoicmjpcebflbpcendnjnm .

I’ll probably update it with some additional functionality in the next few weeks, so feel free to provide suggestions in the Comments.  See updates at top of post.

SPRESTSearchParser

As part of my Pluralsight course on programming SharePoint 2013 with JavaScript, I tossed together a quick little utility to make search programming a little easier.  It’s all detailed in the course, but the source code is available on GitHub: https://github.com/Sector43/SPRestSearchParser

Basically, it works like this:

  1. Make your REST call to get search results from SharePoint
  2. Create an instance of the S43.SearchResultParser object:
    [Screenshot: creating an instance of the parser]
  3. Take the parameter returned from SharePoint (in the screenshot it’s oneSample) and pass it to the parseResults method in the SPRESTSearchParser object, as shown in the second line of the screenshot (there’s also a code sketch a little further down)
  4. The return value from that method call is a JSON object which contains an array of SearchResult objects.  Each SearchResult object is simply a representation of a single result from SharePoint that is easier to work with than the raw results.  It looks like this in the Chrome dev tools:
    [Screenshot: a SearchResult object in the Chrome dev tools]
    Each property from the result set is available directly on the SearchResult object within the array by name – much easier than remembering the position of each property in the result set.
  5. Right now the following properties are available from the PrimaryQueryResult.RelevantResults collection:
  • rank
  • docId
  • title
  • author
  • size
  • url
  • description
  • created
  • collapsingStatus
  • hitHighlightedSummary
  • contentclass
  • fileExtension
  • contentTypeId
  • parentLink
  • isDocument
  • lastModified
  • fileType
  • isContainer

Adding new properties is as simple as adding them to the SearchResult object definition in SearchResult.ts (yes, this is written in TypeScript).  One important thing to realize is that the property names on the SearchResult object must follow the typical JavaScript coding convention and start with a lowercase letter.
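For illustration, here’s a minimal sketch of what that might look like in SearchResult.ts – the exact class shape in the repo may differ, and the serverRedirectedUrl property is just a made-up example of a new property you might add:

    // Sketch only – the real SearchResult.ts in the repo may look different.
    class SearchResult {
        // existing properties (note the lowercase first letter, per the convention above)
        public title: string;
        public author: string;
        public url: string;

        // a new property is just another member on the class;
        // "serverRedirectedUrl" is only an example name
        public serverRedirectedUrl: string;
    }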

 

Accessing the results now looks like this:

[Screenshot: accessing the parsed results in code]
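To pull the whole flow together, here is a hedged sketch in plain JavaScript. The REST endpoint, the jQuery AJAX call, and the variable names (oneSample, parsed) are illustrative; only S43.SearchResultParser, parseResults, and the property names listed above come from the utility itself, and the shape of the returned object (the "results" array name in particular) is an assumption – check the repo for the actual shape:

    // Illustrative sketch only – check the repo for the exact object shapes.
    var parser = new S43.SearchResultParser();

    jQuery.ajax({
        url: _spPageContextInfo.webAbsoluteUrl +
            "/_api/search/query?querytext='sharepoint'",
        headers: { "Accept": "application/json;odata=verbose" }
    }).done(function (oneSample) {
        // steps 2 & 3: hand the raw REST payload to the parser
        var parsed = parser.parseResults(oneSample);

        // step 4: work with named properties instead of positional values;
        // the "results" array name here is an assumption
        parsed.results.forEach(function (r) {
            console.log(r.rank, r.title, r.author, r.url, r.lastModified);
        });
    });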

In the future, I’d like to clean this up and expand it to make use of additional result sets and other search capabilities.  Feel free to fork the code and adapt to meet your needs.  If you do something interesting, please submit a pull request to allow others to take advantage of it as well.

Getting Started with Visual Studio Code

Blah, blah, blah…intro to VS Code and why you should use it…blah, blah, blah.

If you’re here, I’ll assume you know what VSC is and are at least planning on kicking the tires to see what it’s all about.  This article is as much for me as for you – a place where I document all of the little steps I needed to go through to get things set up the way I wanted.  Here’s basically what we cover:

  1. Configure Git, including credential caching for GitHub
  2. Configuring Karma
  3. Configuring TypeScript support

Let’s get started…

Configure Git, including credential caching for GitHub

VSC has support for Git built in:

[Screenshot: the Git icon in VS Code]

Click there to go to the Git Panel.  The first time you visit, it will look like this:

[Screenshot: the Git panel before a repo is initialized]

Click the blue “Initialize git repository” button to, well, initialize your git repo.  Enter a commit message and click the check mark.  Your files are now committed to the local repo.  So how do we push them to a remote (GitHub) repo?

Easy:

  1. From a command prompt, and assuming you have Git tools installed that provide command line support, enter the following two commands:
    git remote add origin your_remote_git_repo_url
    git push -u origin master
  2. Depending on how your remote repo is initially configured, you may get an error here saying, basically, that the remote repo isn’t empty.  You can overcome that by doing a “pull” or a “sync” first.
  3. You’ll be prompted for your credentials – enter them for now; we’ll get them cached next.  When the second command is done, you should be able to see your commit in your remote GitHub repo.
  4. You can now also use the menu options in VSC to interact with your GitHub repo:

[Screenshot: Git menu options in VS Code]

Caching GitHub Credentials

I had GitHub Desktop installed already, and I’m not sure whether it is required for this or not.  Try it without first; if that doesn’t work, try it with.

From a command prompt (again, assuming Git tools installed and in PATH), run the following command:

git config --global credential.helper wincred

Your credentials should now be cached, and VSC should no longer prompt you for them every time you connect to GitHub.

Configuring Karma

Nothing really special to configure Karma for VSC.  When I’m coding, I typically just run

start Karma

from a command prompt in my project folder (assuming that’s where my karma.conf.js file is located) and let it run in the background. Eventually I’ll get around to automating the start and stop of Karma with gulp, but this is good enough for now.  I configure Karma to watch my files so it fires every time I save.
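For reference, a minimal karma.conf.js sketch with the watch behavior I’m describing – the frameworks, file globs, and browser are placeholders for whatever your project actually uses:

    // karma.conf.js – minimal sketch; adjust frameworks, files, and browsers to suit
    module.exports = function (config) {
        config.set({
            frameworks: ['jasmine'],                       // placeholder test framework
            files: ['src/**/*.js', 'test/**/*.spec.js'],   // placeholder globs
            browsers: ['Chrome'],
            autoWatch: true,    // re-run the tests every time a watched file is saved
            singleRun: false    // keep Karma running in the background
        });
    };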

Configuring TypeScript support

You can skip this step if you don’t use TypeScript.  I do, so:

  1. Use the instructions in the following two links to configure things:
    • https://code.visualstudio.com/Docs/languages/typescript
    • http://blogs.msdn.com/b/typescript/archive/2015/04/30/using-typescript-in-visual-studio-code.aspx
  2. For reference, my tsconfig.json file looks like this:
    
    {
      "compilerOptions": {
        "module": "amd",
        "noImplicitAny": false,
        "removeComments": true,
        "preserveConstEnums": true,
        "declaration": true,
        "sourceMap": true,
        "target": "ES5",
        "watch": true
      }
    }
    
  3. The top of my tasks.json files looks like this:
    
    {
    	"version": "0.1.0",
    	"command": "tsc",
    	"isShellCommand": true,
    	"showOutput": "always",
    	"args": [
    		//add your .ts files here		
    	],
    	"problemMatcher": "$tsc"
    }
    

    Nothing below that was changed from the default.

That’s about it.  I’ll probably flesh this out later, but for now these are my notes on what I needed.  Hopefully that helps someone.

All Developers Can Be SharePoint Developers

That’s the title for a full-day session I’m doing at the upcoming Philly Code Camp (October 10, 2015, Philly MTC in Malvern – more information).  Registration is only $76 and includes breakfast and lunch, raffles, guaranteed admission to the Saturday regular Code Camp (which will sell out), and more.  Register here.

Here’s the synopsis for the day:

Until recently, SharePoint was the red-headed stepchild of the Microsoft web development world, requiring intimate knowledge of SharePoint and its gargantuan object model. This made SharePoint a very specialized skill-set that stood off to the side on its own. Its lack of support for standard web-development practices made it difficult for new developers to learn – or to even want to learn.

Fortunately, this has changed.

SharePoint 2013 has joined the rest of the web-development world – supporting standard technologies such as REST, and languages ranging from .NET to Perl, PHP, Ruby, etc. JavaScript is now a first-class citizen. Gone are the days when “doing SharePoint” required extensive, arcane knowledge. For developers, SharePoint is now just another data store accessible via a robust, REST-based API.

In this full-day session, SharePoint MVP David Mann will introduce the SharePoint REST API and show how it can be used to build client-side applications using JavaScript, utilizing popular frameworks such as Angular and libraries such as JQuery, common tools like Node, Bower, Gulp and Yeoman, as well as rich-client applications using C#. This includes both on-premises installations of SharePoint as well as cloud-based environments such as Office 365.

Bring your laptops as this is a hands-on session. No SharePoint experience is necessary, but basic programming skills would be a big help. This is a great opportunity to add SharePoint to your toolkit, or to take your SharePoint skills to the next level. Individuals and teams are welcome.


I’m still tweaking the material, but here is the current agenda:

  1. Introduction & Getting Started (including why SharePoint?)
  2. Setting up the Environment
  3. Setting up the Environment (Lab)
  4. Tools & Frameworks
  5. Tools & Frameworks (Lab)
  6. The SharePoint REST API
  7. Hello (SharePoint REST) World (Lab)
  8. CRUD Operations
  9. CRUD Operations (Lab)
  10. Provisioning
  11. Provisioning (Lab)
  12. Language Agnostic – C#, Ruby, [Insert favorite language here]
  13. Language Agnostic (Lab)
  14. Wrap Up and Next Steps

I’ll post more details as we get closer.

To follow along with the labs, you’ll need:

  • a laptop with Visual Studio (2013 or 2015)  installed

That’s it.  You will not need SharePoint installed.

Critical Path Training Webinar Material

I completed my Critical Path Training webinar earlier today.  All in all, I think it went pretty well.  There was a TON of material to cover, and as usual, not enough time to cover it all.  I’ve posted the source code I showed to GitHub (https://github.com/Sector43/CPTJSWebinar) and you can clone it from here:  https://github.com/Sector43/CPTJSWebinar.git.

In the webinar I showed a sample application which manipulated Views in SharePoint to show an icon in place of a column value based on the value of that column for the item in the view.  It wasn’t a fancy demo, but the code has a lot of good stuff in it – including a full suite of tests, use of the revealing module pattern, IIFEs, defensive coding, dependency injection, etc. 
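The real code is in the repo above, but as a quick, hypothetical illustration of the revealing module pattern wrapped in an IIFE (with a simple injected dependency), a sketch might look like this – the names are made up for the example:

    // Hypothetical sketch of the revealing module pattern + IIFE; names are illustrative.
    var S43 = S43 || {};

    S43.ViewFormatter = (function (logger) {
        "use strict";

        // private helpers stay hidden inside the closure
        function mapValueToIcon(value) {
            return value === "Complete" ? "check.png" : "pending.png";
        }

        function formatCell(value) {
            logger.log("Formatting value: " + value);   // uses the injected dependency
            return "<img src='" + mapValueToIcon(value) + "' />";
        }

        // reveal only the public surface
        return {
            formatCell: formatCell
        };
    })(console);   // dependency injection – pass in whatever logger you use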

Once you’ve pulled the code down, there are a few steps you’ll need to take to make everything work.  As I said in the demo, this is not beginner-level stuff, so I’m assuming you’re familiar with VS, node, NuGet, etc. or can figure out what you need.  Here’s a summary of what you need to do:

  1. Install node – www.nodejs.org
  2. Add an environment variable for CHROME_BIN pointing to your Chrome executable.  You can also add one for FIREFOX_BIN if you’d like to use Firefox for testing in Karma.
  3. Add the Karma Visual Studio plugin from here: https://visualstudiogallery.msdn.microsoft.com/bfe6feb7-7ec4-4e8e-9d90-cf6ea2cd2169  (only if you want to see the test results in the VS Test Runner window).
  4. In Visual Studio, toggle the “Run Tests on Build” option on in the Test Explorer Window
  5. Restore NuGet packages
  6. Install node packages (from a command prompt in the root of your project):
    • npm install karma-cli
    • npm install karma
    • npm install karma-chrome-launcher
    • npm install karma-xml-reporter
    • npm install karma-jasmine
    • npm install jasmine-jquery
  7. Edit the karma.conf.js that is in the source code if necessary
  8. From the command prompt in the root of your project: karma start

Karma should now launch in a command window, launch Chrome and then run all of  the tests in the project.  They should all pass initially.

Feel free to kick the tires on the code and give it all a good workout.  If you find bugs, please issue a pull request and I’ll work them back into the repo.

The LogManager and REST utilities are included in here but will eventually be wrapped into the SPClientLib project I’m working on (which will also be released on GitHub).

As I said in the webinar, this is what I would consider “production-ready” code, but PLEASE review it and understand what it is doing before using any piece of it as-is.  It is released under the MIT license so you are free to use it for any purpose, but you must keep attributions intact and please share back any improvements.

Dave

Office 365 Update

I’m presenting at my user group (TriState SharePoint) tonight for our periodic session covering the latest updates to Office 365.  All of the content for this is coming from my Office 365 Concierge newsletter, but with demos and more detailed discussions on some topics.  The slide deck from the session is available here: http://blog.mannsoftware.com/wp-content/uploads/2015/02/Office%20365%20Update%202-2015.pdf

Part 2: Loading JavaScript Libraries (It Still Shouldn’t Be This Hard)

My post on Sunday seems to have struck a little bit of a nerve.  A number of folks added comments to the blog post and a few others commented directly to me via email or other channels.  In this post, I’m going to quickly summarize and respond to some of the comments and then move on to step two – trying to formulate thoughts for moving forward, if there is in fact a path to move forward any further.

OK, here we go, in no particular order…

(Oh, yeah, perhaps this goes without saying, but these are largely opinions based on my experience and stories I hear from other developers.  YMMV, and I’d *love* to be proven wrong about some of these, but let’s make sure to keep the discussion productive and moving towards a solution).

  • Marc’s concept of “Functions as a Service” (FAAS) is interesting and one I think should be investigated and fleshed out.  I haven’t read the book chapter Marc references, so maybe he’s already done that…Marc?
  • Hugh’s mention of JSFiddle as a case study is *exactly* what I had in mind for a part of this.  For small-scale customizations implemented by an end user, selecting a library or two needs to be as dirt simple as JSFiddle.  I think this ties in to FAAS as well.
  • As a few folks mentioned, this is not entirely a technical problem.  Regular old Governance comes into play in a big way.  Too, if anything comes of this, we must get a groundswell of community support behind it as a “standard”.
  • This is perhaps a semantic argument, but contrary to what a few folks posted, I think there *is* a one-size-fits-all solution, or at least one-size-fits-most – there’s always going to be edge cases that don’t work.  The semantics comes in because I’ll call it one solution with a couple of facets, but all falling under one umbrella.
  • SPO (Office 365) and MDS must be addressed
  • Supporting MDS is somewhat difficult, and more than just not polluting the global namespace
  • RequireJS is a great example of managing dependencies and loading modules. I use it frequently, as do many developers. Unfortunately, as it stands right now, it is not MDS-compliant (it creates script nodes using createElement)
  • Loading multiple versions of the same library is a performance and memory use problem, but also breaks some libraries (like JQueryUI, and probably a bunch of other JQuery plugins, though I haven’t tested them).  Namespacing doesn’t fix all of this.
  • The problems with loading multiple libraries are compounded by a lack of control over load order.  SharePoint has too many ways to load libraries and they all load at different times.
  • A comment was made that “most companies would not want to advertise the existence of a “standard library loader” to general users… since that would implicitly suggest that ad-hoc customization is ok…maybe even supported” which part of me agrees with, but a bigger part of me thinks those companies are just burying their heads in the sand.  Unless they somehow hide the Script Editor webpart (which some do, I know) they have people doing customizations and ignoring it doesn’t make it go away.  They’d be better off embracing and providing some guidance/support.  This also plays into my “one solution with many facets” approach – companies can choose which pieces they wish to implement to cover their scenarios.
  • Another comment was “As a developer you should be working to make YOUR js libraries isolated from anyone else.”  I think this is wrong.  This is not being a good corporate citizen.  This is the “to hell with everyone else” scenario in my original post.  I agree that you should isolate your code as much as possible, but unless everyone does, there can still be problems.  Too, your code is not the only code running in the environment.  The only exception to this I can think of is if you are developing an application as an ISV or internal developer/consultant and your application is entirely self-contained – you control the horizontal and the vertical.  However, if you develop even one web part as part of your application, you don’t have that level of control – users could add that webpart outside of your application.  If you integrate with OOB SharePoint – even just using its Master Page – you don’t have that level of control.  If your application allows Custom Actions to load, you don’t have this level of control…etc., etc., etc.  Even if your stuff is perfectly insulated and always works, it may break something else or cause some other problem.
  • A few folks commented along the lines of users maintaining/understanding/impacting their code.  This falls under the previous bullet.  In most cases, your code is not the only custom code running in the environment.
  • A big part of the problem is that we cannot say that this is only of concern to professional developers.  While I personally dislike the term “citizen developer” I agree with the fact that non-developer users are customizing SharePoint and using JavaScript they cobble together from the internet or simple trial and error, and they too often “ship it” as soon as it appears to do what they want with no additional testing or documentation.  This ship has sailed and there’s no way we’re bringing it back to port, much as we might want to.  (And yes, I realize that some of those “citizen developers” do very good work – better than some developers – but I’m mostly concerned with those who don’t)
  • I have not specifically noticed the “jarring effect” some folks mentioned, but maybe I’m just missing it.  I’ll dig out some code I wrote that should show it and try to pay more attention.

I think that sums up my responses and follow-on comments.  Now to move forward…

As I see things in my twisted little mind, here’s what we need, from a technology point of view (governance is next, don’t worry).  This is very much a blue-sky wish list, and I realize that some of it may not be feasible/possible.

  • A replacement for the Script Editor web part that simplifies things for end users to allow them to pick the libraries they need and guides them towards some minimal “best practices” – a very lightweight linter of some sort that enforces a few basic rules would be a good start.  Optionally blocking inline script tags would also be nice (so MDS doesn’t break).  I think 99.9% of the time, simply telling them that JQuery is already available will be sufficient, so this doesn’t have to be fancy.
  • A way to track which libraries and versions are available globally (scoped to a given web, I think)
  • A way for developers to specify/discover which libraries are loaded and the order in which they load
  • A fast, reliable way to load and initiate a script loader which can then load other required scripts
  • An MDS-compliant way to specify dependencies between modules and load order

I’ve probably missed a few things, but that’s a start.

For governance, here’s my wish list:

  • Documentation of whatever the technology solutions are
  • Consensus amongst developers that whatever “this” is, it is the recommended way to proceed (although I acknowledge this is like herding cats and will never be close to 100%)
  • Training/education material for end users and developers

So where do we go from here?  Next is a viability assessment of the technology wish list.  What’s possible?  What’s feasible?  After that, if enough of the technology can be made to work then we kick off a community project.  Or we talk to Microsoft about carving off a little corner of the PnP world for this effort and get a little help from the mother ship (which would also go a long way in getting buy in from developers and companies).

This may all very well be a pipe dream – I’m somewhat known for tilting at windmills (and metaphors, mixed or otherwise) – but if we don’t at least try to come up with a solution to the problem, all we’re doing is complaining, and that’s just wasted energy.

As always, comments welcome.

Dave

Loading JavaScript Libraries – It Shouldn’t Be This Hard

I’ve been doing a lot of JavaScript programming for the past few years. SharePoint 2013 has only accelerated the pace. Currently, I split my time roughly evenly between SharePoint and non-SharePoint work at my primary client, but both projects make heavy use of JavaScript, between them making use of many of the popular JavaScript libraries – Angular, JQuery, JQueryUI, Moment, Require, Knockout, etc., etc., etc.

The disparity between the two worlds is still quite wide. Even something as simple as loading up required libraries is more difficult in the SharePoint world than in the rest of the world. Part of this, I realize, is due to the nature of SharePoint. It is a much larger application than most, and it lends itself to extensibility far more than typical non-SharePoint applications do. There are also often far more people involved as potential customizers of SharePoint than of other applications.

And that’s where the trouble begins.

Extensibility means a loss of control. Even in controlled environments that actually effectively do SharePoint governance, extensibility invites problems. JavaScript is still a little bit of the Wild West; every project is going to mandate the use of different libraries, different versions and different configurations. If all I’m concerned about is my project, things are easy: I load the libraries I need using one of any number of script loaders or script-loading approaches. To hell with anyone else.

That, obviously, is not a very good approach, but it is one taken all too often. I was recently asked to look into a problem at a client that boiled down to them having a ridiculous number of JavaScript libraries loaded on too many pages:

  • 4 different versions of JQuery
    • One page actually loaded JQuery 6 times, as poorly trained site admins, or users with permissions, were adding Script Editor Web parts on pages and each loading JQuery, from different document libraries and CDNs
  • 2 different versions of JQueryUI
  • 2 versions of Knockout

This is in an environment that at least makes an effort at governance. Unfortunately, too much of it is left in the hands of developers, but this isn’t a post about governance directly. Instead, it is an attempt to try and figure out what we as developers and architects should do.

As a consultant and ISV, I cannot mandate too strictly how my clients’ or users’ environments are configured. I can say something like – if you don’t have JQuery installed and loaded, we will install and load it. But that’s about it. That may be fine for the moment my customization (either custom built as a consultant or a product I sell as an ISV) is added to the environment, but I have no control over what happens later. If, three days after my customization is added, another product or customization is added which loads its own version of JQuery, we may have a problem – especially with the somewhat recent compatibility split between JQuery 1.x and 2.x. The issue that prompted my being called in to that client was caused by a customization loading a new version of JQueryUI, one which contained a different custom collection of widgets. It broke a different customization.
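For what it’s worth, the naive “install and load JQuery only if it isn’t already there” check mentioned above might look something like the sketch below. The version threshold and fallback URL are placeholders, and this createElement-style loading is exactly the kind of approach that runs into the MDS and load-order concerns discussed in these posts:

    // Naive "load JQuery only if missing" sketch – illustrative only.
    (function () {
        var minMajor = 1, minMinor = 8;   // placeholder minimum version

        function jQueryIsGoodEnough() {
            if (!window.jQuery) { return false; }
            var parts = window.jQuery.fn.jquery.split(".");
            return (+parts[0] > minMajor) ||
                   (+parts[0] === minMajor && +parts[1] >= minMinor);
        }

        if (!jQueryIsGoodEnough()) {
            // createElement-based loading: simple, but not MDS-friendly and it gives
            // no control over what other customizations load later or in what order
            var script = document.createElement("script");
            script.src = "/SiteAssets/jquery.min.js";   // placeholder URL
            document.head.appendChild(script);
        }
    })();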

What’s the solution? I don’t have a good answer, but I’m working on it and invite collaboration. If we can come up with something good, this would be a killer community-driven project. Here are the problems, as I see them:

  1. We need some way to track which libraries and versions are available in an environment. This can’t be “just” documentation or offline means – code should be able to check and respond appropriately. Somehow, this needs to be global across the environment and future-proof.
  2. MDS must be supported. As a consultant or ISV, I cannot dictate that MDS be turned off and have that remain inviolate for all time. I know many developers prefer to just turn it off, but it is a feature of SharePoint and therefore must be supported.
  3. There must be a standard for how libraries are loaded so you can anticipate when they will be available. One of the problems faced by the client I mentioned was that via the convoluted mess they had gotten themselves into, JQueryUI was loading before JQuery, which doesn’t work.
  4. There must be a simple way for non-developers to participate in maintaining things. Otherwise we run into the problem above where users had loaded many different versions of JQuery via the Script Editor Web Part
  5. It would be nice if this made things easier for developers – perhaps by supporting AMD modules and requirements management, either via Require.JS directly or a fork of it to make it SharePoint and MDS compatible.
  6. It should support both on-premises and SharePoint Online/Office 365, including Apps.

There are a few other things that should probably be included in there, but these, to me, are the big targets. I realize that there are answers to each of these individually, but what is missing is a comprehensive approach and a commitment from the community to drive a consistent approach. Honestly, there may not be one ring to rule them all. The best thing may be two or three approaches, but that would still be better, especially if there were some consistency and commonality amongst them.

If anything is to come of this, education and community support are critical to making it work, so what do you think? Leave a reply and we’ll see where this goes.

Dave

Partial Classes to the Rescue!

I’m working with a fairly large codebase – a few of my base classes have several thousand lines of code.  Not huge, but certainly unwieldy when you need to be hopping around the codebase.  There’s just no great way to organize everything.  While some folks eschew regions, I find them useful when used sparingly and what I’ll call “correctly”.  In other words, regions never go inside a method – that can quickly lead to a mess – but they’re useful for grouping a bunch of related members so you can quickly hide them.  In some cases, those members can and should be moved to a separate class, and in those cases that’s what I do.  But in some cases, a separate class doesn’t make sense.  Regions do.

But even that isn’t enough sometimes.  In my current codebase, I have a bunch of interfaces which my various classes implement.  Each interface defines 5-10 members, so they’re pretty tiny, but even that can get unwieldy.  My core base class implements 5 of those interfaces, which means that it has something around 40 methods just to implement its interfaces – that’s not getting into some of the core logic for that abstract base class, just its interface implementations.  It also has a bunch of private methods and properties which it needs to do its job.  Wading through all of that code can be a headache sometimes.  Worse, I’m still a little old fashioned and sometimes I want to (gasp) *print* some of my code to be able to analyze it and draw circles and arrows while working on it.  Regions don’t help when printing – everything collapsed inside the region gets printed.

That’s where partial classes come in. 

I’ve started breaking my larger classes into partial classes – one file for each interface.  Now everything related to ISecurable or IStorable, for example, is in one place.  Each code file is now closer to 700-800 lines of code instead of 3,000-4,000.  Much easier to work with.

The best part is, neither Visual Studio nor the compiler care.  The compiler happily combines the separate files together into the exact same MSIL as if they had been all in one big file to begin with.  Visual Studio doesn’t miss a beat, either.  Not only does it have no trouble with the separate files, it still shows me proper Intellisense across files:

[Screenshot: Intellisense working across partial class files]

Furthermore, Visual Studio is smart enough to show me all of the members of my class, even those defined in another file from the one I’m currently working in, and it even shows them dimmed so I know they’re in another file:

[Screenshot: members defined in other files shown dimmed in Intellisense]

Other Visual Studio functions, such as Peek Definition, Go to Definition, Find All References, etc., continue to work as well.

Typically partial classes are used to separate generated code from developer-written code.  Other folks use them to allow multiple developers to work on the same class at the same time.  This is just one more case where partial classes provide real value.  I know some folks will disagree with this, and that’s fine.  To each their own.  To my mind, partial classes, like regions, provide significant value when used appropriately and in moderation.

One final point, I organize my partial classes a little differently than I think other folks might, and I think this helps keep things straight.  Every partial class in my code base is in a folder named for the class.  Each file of the partial class has the same base part of its name – the name of the class, but then it also includes the interface implemented in that file.  Here’s a view of Solution Explorer showing one of my partial classes (RepositoryItem):

[Screenshot: Solution Explorer showing the RepositoryItem partial class files]

Each interface gets its own file, and then there’s RepositoryItem-Core, which contains everything else for the class.