The Halloween Bounty!

Ever wonder what the distribution of candy is these days?  Here's the breakdown…

Generic Lollipops – 53
Snickers – 52
Twizzlers – 50
Milky Way – 47
Butterfinger – 37
3 Musketeers – 37
Baby Ruth – 32
Laffy Taffy – 30
Jolly Ranchers – 28
M&M's – 17
Toothbrushes – 17
Swedish Fish – 14
Crunch bars – 14
Lemonheads – 10
Starbursts – 10
Kit Kats – 10
Twix – 7
Skittles – 6
Reese's – 5
Gobstoppers – 5
Sour Patch – 4
Almond Joy – 1
Misc(s) – 1 – everything else we had one of…

This is the stuff marketing people love to know…

Missing Features from SharePoint Designer 2010

SharePoint Designer 2010 is missing a pretty important feature: the ability to roll up data from child sites and roll down information from parent sites.

Here's what we used to be able to do in SPD 2007:

1) You could open the Data Source Library by clicking "Data View->Manage Data Sources" and then add a new data source library by clicking the button at the bottom:

2) This link would allow you to type a URL to another site:

3) This would then allow you to see the other data source library's data sources

4) You could then use the "Linked data source" wizard to select from the current and newly added data source libraries.

 

This ability is missing in SPD 2010; you will see no link that allows you to add these external libraries:

In software design, it is a sin to remove features that your users have grown accustomed to.  The only workaround that I have found is to try to utilize the REST or SOAP data source to connect to the other libraries:

This of course means that you will need to set up the authentication:

The only problem is that every time I have tried to do this, Designer 2010 gives an error, which basically leaves us without this valuable feature we had in 2007.
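
Before giving up on it entirely, it is worth verifying outside of Designer that the remote endpoint actually responds and that your credentials work. Here is a minimal PowerShell sketch; the site URL and the Announcements list name are placeholders for whatever site and list you are actually targeting:

    # Verify the ListData.svc REST endpoint responds before wiring it into SPD 2010.
    # The URL below is a placeholder -- point it at the site you want to roll up from.
    $url = "http://server/sites/childsite/_vti_bin/ListData.svc/Announcements"

    $client = New-Object System.Net.WebClient
    $client.UseDefaultCredentials = $true   # or supply explicit credentials

    # If this returns an Atom feed of list items, the endpoint and authentication
    # are fine and the problem is on the Designer side.
    $client.DownloadString($url)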

Marc has posted a workaround for manually writing the code that uses data sources in separate sites.

SharePoint Health Analyzer Jobs

SharePoint has the ability to heal itself.  It's a pretty cool concept invented by the guys at IBM a long time ago that is finally being worked into Microsoft products.  In Central Administration you will find the 'Monitoring' page has some pretty neat things on it:

One of the coolest is the Health Analyzer Rules. By clicking on Review Rule Definitions, we will see several of these:

I have explored several of these jobs and, since this is RTM, not all of them are working exactly like they were intended to.  One example is the 'One or more categories are configured with Verbose trace logging.' rule.  This rule is designed to check whether anyone has set the logging level to 'Verbose'.  If they have, it can automatically fix this condition.

Out of the box, we can see the settings are 'Information' for the event level and 'Medium' for the trace level:

As an inexperienced SharePoint admin, you may end up clicking the "All categories" checkbox and then setting the values to their highest level, 'Verbose':

 

This is bad, as it will generate VERY large log files in a production environment.  We are talking gigabytes per minute.  This is especially bad for a virtualized image, as the image file will grow very large.  Then try backing it up by copying it…it's not fun copying a VM file that is several hundred gigabytes around.

Luckily, the health rule will watch for this condition, and when it finds it we get the nice red or yellow bar at the top of the Central Administration site.  We will also see that the condition has been noted in the rule status list.  Clicking on the item, we will get a definition of what is misconfigured; note the ability to "Repair Automatically":

Unfortunately, the logging health analyzer job needs an update.  It is supposed to reset the levels back to the default settings.  It does this for the trace level setting, but it doesn't touch the event level settings:

These still remain at 'Verbose' after the job runs.  The job should also set these back to 'Information', per the out-of-the-box settings.
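
Until the rule gets fixed, the simplest workaround is to reset both settings yourself from the SharePoint 2010 Management Shell. A minimal sketch, assuming you want every category back at the out-of-the-box levels:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Reset diagnostic logging for ALL categories back to the defaults
    # (event level = Information, trace level = Medium).
    Set-SPLogLevel -EventSeverity Information -TraceSeverity Medium

    # Or clear any overrides so each category returns to its own default.
    Clear-SPLogLevel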

Dead Beat Training Centers – The Disgraceful List

Ok, so I have a potential deadbeat training center that I'm going to embarrass pretty heavily if they don't pay up.  I will be posting their name here on our "Deadbeat Training Centers" list.  If you are an MCT or a training broker and have some outstanding payments, this will be a potential outlet to let the community know about them.  Email me your deadbeat center and what they have or have not done and I will post it here (please note that this will NOT be anonymous).

MCTs, beware the following training centers:

Chris

SharePoint Saturday San Diego – Call for speakers

Want to come to sunny, amazing, awesome San Diego in February (2/26/2011) and meet fellow SharePoint enthusiasts?  Join us as a speaker for San Diego's first SharePoint Saturday:

SPS Site:
http://www.sharepointsaturday.org/sd/default.aspx

Register Here:
http://spssan.eventbrite.com

Call for speakers:
Send an email to chris@sanspug.org with:

• Name:
• Title:
• Company:
• Email address (for Connections business):
• Email address (for attendees, if different):
• Mobile phone:
• Web/Blog URLs (if applicable):
• Twitter handle (if applicable):
• Bio: <A current bio is required>
• Headshot: Attach a printable headshot (no 72dpi web-sized JPGs).

 

For each submitted topic, use the following template:

 

• Title: <a sexy, marketable title that also clearly indicates the topic of the session>
• Content Focus: <IT Pro, Dev, Bus User, Governance, 3rd party tools, etc.>
• Abstract: <abstract>

We will also require that you submit your PPTs before you are confirmed as a speaker (you will have until January 15th to submit them).

This event is hosted by the San Diego SharePoint Users Group (www.sanspug.org).

Chris

Microsoft is looking for a few good projects!

Through a local friend whose company is the leading Connected Systems content provider for Microsoft, we are partnering with them and Microsoft to engage on projects to develop solutions built using Windows Server AppFabric, Workflow Foundation and Windows Communication Foundation. We will be co-sponsoring software development projects that meet the business and technical criteria elaborated below.  If you are interested, feel free to email me (chris@architectingconnectedsystems.com) or DM me on Twitter (@givenscj).

Qualifying “Business Criteria”

• Signed PR Release Form
• Premier Support
• MCS and/or Partner Engaged
• Current Release of Product

 

Qualifying “Technical Criteria”

• Hosting: Throughput: WF/WCF > 100 tx/second
• Integrated Platform: AppFabric, WIF, SharePoint Server or BizTalk integrated solution
• Management: PowerShell API | SCOM
• Monitoring: >100M tracked events (15 events/instance @ 50 calls/sec, 8-hour day, over 5 days)
• Persistence: >50K persisted active instances | >10 persist points/workflow
• Tier1/Mission Critical: Cache HA feature and WF/WCF mission critical application

 

• Projects that have case study potential due to the client's name.

  o Usually these are companies with strong name recognition or large in size, but exceptions often occur and smaller companies can still participate, just at a smaller dollar amount.

  o Example: a Fortune 1000 corporation embarking on a new project using Workflow Services or WCF Services hosted on Windows Server AppFabric.

• Projects that are technically interesting.

  o The project may be developing items which are of interest to the broader Microsoft ISV community, and the work product can be generalized (e.g., removing the client's IP) for sharing. For example, getting AppFabric to use Oracle for persistence and monitoring.

  o The project may well provide lots of feedback on less-used aspects of AppFabric, WF and WCF that can be a source of bug reports to MS and guidance to the ISV community. For example, building a Workflow Services design environment completely external to Visual Studio.

  o The project itself does not have to be large in scope; it may be just a proof-of-concept effort. In effect, Microsoft will "seed" the development's success.

CJG – SharePoint 2010 Speaking Engagements

I'm proud to announce that I am getting out of the courseware development cage and that I have a couple of speaking engagements in the next couple of months.  The first is #SPSLA (SharePoint Saturday LA) this Saturday, Sept 18th. I'll be speaking on:

  • Service application architecture and building custom service applications (if you are lucky you might get to see my new service application that does Social Computing Filtering!)

 I will also be speaking at SharePoint Palooza in Seattle, WA on Oct 15th, 2010.  I'll be speaking on:

  • PowerPivot: Learn what PowerPivot is, how to use it, and how to integrate it with SharePoint 2010.  In this session you will learn what many of the DAX expressions are used for, common DAX patterns, and how to set up your PowerPivot for SharePoint integration.  Chris will also show you how to set up automatic data refresh and discuss some of the gotchas of integration.
  • Developing around SharePoint 2010 Social Computing: Learn how the new social computing features really work in SharePoint 2010 and what the common object model classes are for utilizing these new features in your own code.  Topics will include tagging, metadata terms and the new activity stream.  Chris will also demonstrate a technique for filtering tagging for governance purposes.
  • Developing with SharePoint Search and SharePoint FAST: Learn how to bypass Central Administration and write your own search management tool.  In this session we will explore the Search Object Model, the query interfaces and some of the more uncommon extension points of SharePoint Search like IFilters and protocol handlers.  We will also explore the various areas in which you can extend FAST Search.

Hope you can join me for these two great events!
Chris

 

Content Type Publishing – Rare Answers to Common Questions

At our SanSpug.org meeting for August I had some very intelligent questions asked of me.  Ones I didn't know the answer to, until now!  Here are the questions:

  • What happens to a list that has been assigned a published content type when we change and republish the content type?  Is it business as usual in that the content type will update across the lists?
  • What happens to a list that has been assigned a content type and we move that list to another site? 

Very interesting questions, and my default answer would be to say that publishing is nothing more than automating the "creation" of content types, and therefore all the usual rules should apply here.  Let's take a look at the basics.

Basic #1:  Does a list get updates of a content type?

  1. Create a new content type
  2. Assign it to a list
  3. Create a new item based on the content type
  4. Update the content type
  5. Review if the content type gets the updated columns/settings

ANSWER:  A list will only get the updates if you set the "Update all content types inheriting from this type?" radio button to Yes!
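
The object model behaves the same way. Here is a minimal sketch (the site URL, the "My Content Type" name and the Department column are placeholders) showing the updateChildren flag that corresponds to that radio button:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web = Get-SPWeb "http://intranet"
    $ct  = $web.ContentTypes["My Content Type"]

    # Make a change to the site content type, e.g. add an existing site column.
    $field = $web.Fields["Department"]
    $ct.FieldLinks.Add((New-Object Microsoft.SharePoint.SPFieldLink($field)))

    # Update($true) pushes the change down to every list content type that
    # inherits from this one -- the equivalent of answering "Yes" above.
    # Update($false) keeps the change at the site content type only.
    $ct.Update($true)

    $web.Dispose()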

Basic #2:  Does a list exported to a new site collection keep the content type?

  1. Create a new content type
  2. Assign it to a list
  3. Create a new item based on the content type
  4. Export the list with contents as a template (.stp)
  5. Restore to a new site collection
  6. Review if the content type is created/attached

 ANSWER: Yes!  The content type will in fact move to the new site collection and be assigned to the newly created list!
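
A quick way to double-check this from PowerShell (the site collection URL and the list name are placeholders for wherever you restored the template):

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web  = Get-SPWeb "http://intranet/sites/newsitecollection"
    $list = $web.Lists["My Restored List"]

    # Every content type attached to the restored list; the custom content
    # type that came along with the .stp should show up here.
    $list.ContentTypes | Select-Object Name, Id

    $web.Dispose()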

 CTHub #1:  Does a list get updates of a content type via a Managed Content Hub?

  1. Create a new Managed Hub
  2. Configure the MMS to point to the hub
  3. Create a new content type in the hub
  4. Publish the content type
  5. Wait for the content type to be published
  6. Assign it to a list
  7. Create a new item based on the content type
  8. Update the content type
  9. Wait for the publish
  10. Review if the content type gets the updated columns/settings

ANSWER:  Yes!  This implies that the publishing process effectively sets the "Update all content types inheriting from this type?" option to Yes.
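
Steps 5 and 9 (the waiting) are driven by two timer jobs, so you don't have to sit on your hands. A hedged sketch, assuming the jobs carry their default "Content Type Hub" and "Content Type Subscriber" display names (check Central Administration->Monitoring->Review job definitions on your farm):

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Run the content type syndication jobs on demand instead of waiting
    # for their schedules.
    Get-SPTimerJob |
        Where-Object { $_.DisplayName -like "Content Type*" } |
        ForEach-Object {
            Write-Host "Starting $($_.DisplayName)"
            Start-SPTimerJob $_
        }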

CTHub #2:  What happens if you create a list template and move the list to another site collection, then update the Content Type in the hub?

ANSWER:  The newly created list WILL get the updates from the hub as long as the site collection is located inside the same web application.

 CTHub #3:  What happens when you publish a content type with the same name as one already existing?

ANSWER:  The content type will not be updated; the content type publishing error log will display an "Exists" error.

Enjoy!
Chris

SharePoint Logging Database Exposed

So what exactly does this thing do anyway, other than keep growing HUGE?!?  Let's take a look!  A little background…I have been running SharePoint 2010 RTM since it was released, and since then the Logging database has grown to 500MB.  The database itself seems to be dynamic in that you will start off with a small number of tables, but as you increase your feature usage, more tables will be added.

Question #1:  Just how many tables does your system have?  Here's my current list (but you may have even more!):

  • AnalysisServicesConnections
  • AnalysisServicesLoads
  • AnalysisServicesRequests
  • AnalysisServicesUnloads
  • Configuration
  • ExportUsage
  • FeatureUsage
  • ImportUsage
  • MonthlyPartitions
  • NTEventLog
  • PerformanceCounters
  • RequestUsage
  • Search_CrawlDocumentStats
  • Search_CrawlProgress
  • Search_CrawlWorkerTimings
  • Search_PerMinuteFTQueryLatency
  • Search_PerMinuteTotalOMQueryLatency
  • Search_PerMinuteTotalQueryLatency
  • Search_PerMinuteTotalUIQueryLatency
  • Search_QueryErrors
  • Search_VerboseFTQueryLatency
  • Search_VerboseOMQueryLatency
  • Search_VerboseQueryProcessorLatency
  • Search_VerboseUIQueryLatency
  • Search_VerboseWebPartQueryLatency
  • SiteInventory
  • SQLDMVQueries
  • SQLMemoryQueries
  • TimerJobUsage
  • TraceDiagnosticsDummy
  • ULSTraceLog
  • Versions

All of those, of course, have their own partition tables.

Question #2: What the hell is a partition and why do I have 32 of them?

You will see that there is a maximum of 31 partitions, but you will also notice that there is a partition number 0.  That makes 32 days of partitions?  I think this is a bug in the stored procedures that create the partition tables.  The idea is to keep each day's information broken apart on a rolling schedule (this means that day 9 may not map to calendar day 9).  You can look at the Configuration table to find what the current partition is.  As the timer job runs to process the log data, it will move to the next partition and recycle as it goes.

Question #3: How do I turn the freakin thing off?

This can be accomplished by going into Central Administration->Monitoring->Configure usage and health data collection.  Uncheck the "Enable usage data collection" checkbox and the "Enable health data collection" checkbox.
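
The usage half of that page can also be toggled from PowerShell. I haven't found an equally clean one-liner for the health data collection checkbox, so this sketch only covers usage logging:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Turn off usage data collection for the farm (the "Enable usage data
    # collection" checkbox). Flip it back on with -LoggingEnabled:$true.
    Set-SPUsageService -LoggingEnabled:$false

    # Verify the current state.
    Get-SPUsageService | Select-Object LoggingEnabled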

Question #4:  I like the data, but there is too much, how do I shrink it?

If you explore the stored procedures in the database, you will see that the data is simply truncated when processing moves to a new partition.  You can do the same by looping through all the partition tables and truncating them.  You can also just run the function called "fn_TruncateUnusedPartitionsHelper".
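
Here is a rough sketch of the loop-and-truncate approach. It assumes the logging database is named WSS_Logging, lives on a server called SQL01, and that its partition tables follow the <BaseName>_Partition<N> naming pattern; try it on a test farm first, since it deletes logged data:

    # Placeholder server/database names -- adjust for your farm.
    $connectionString = "Server=SQL01;Database=WSS_Logging;Integrated Security=SSPI"
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $connection.Open()

    # Find every partition table.
    $cmd = $connection.CreateCommand()
    $cmd.CommandText = "SELECT name FROM sys.tables WHERE name LIKE '%[_]Partition%' ORDER BY name"
    $reader = $cmd.ExecuteReader()
    $tables = @()
    while ($reader.Read()) { $tables += $reader.GetString(0) }
    $reader.Close()

    # Truncate each one.
    foreach ($table in $tables) {
        $truncate = $connection.CreateCommand()
        $truncate.CommandText = "TRUNCATE TABLE [dbo].[$table]"
        [void]$truncate.ExecuteNonQuery()
        Write-Host "Truncated $table"
    }

    $connection.Close()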

Question #5:  I like the data that is generated, can I please have some more?

Hell yeah you can!  If you explore the timer jobs titled "Diagnostic Data Provider*", you will see they are disabled.  Turn them on (see the PowerShell sketch after the list below) and you will get even more data around:

  • Event Log
  • Performance Counters – Database servers
  • Performance Counters – Web Front Ends
  • SQL Blocking Queries
  • SQL DMV
  • SQL Memory DMV
  • Trace Log
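
A minimal sketch for enabling them all at once from the SharePoint 2010 Management Shell (the jobs keep their default schedules, so adjust those separately if you need to):

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Enable every "Diagnostic Data Provider: ..." timer job in the farm.
    Get-SPTimerJob |
        Where-Object { $_.DisplayName -like "Diagnostic Data Provider*" } |
        ForEach-Object {
            Enable-SPTimerJob $_
            Write-Host "Enabled $($_.DisplayName)"
        }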

Ayman has a nice post on how you can use .NET Reflector to get in and see how these monsters do their dirty work here.

Question #6:  Can you have too much data?

Oh yeah!  For those of you running SQL Express (why?), you can easily hit the 4GB limit on this database and will be truncating tables quite frequently!  For those of you with limited disk space, get ready to also be truncating tables.  Those of you with HUGE disk arrays, well…you should be fine.

Question #7:  Where is this documented schema thing I have seen so many marketing slides about?

Who the heck knows!  It seems that anyone and any application is able to write to this thing.  Even you could write into it with your own application.  I'm not sure how you get a new set of partition tables set up and the attributes created, but eventually I will find some time to build something for it 🙂

Enjoy!
Chris