
Data Optimization Using Data Request Objects Implementing IEquatable December 23, 2013

Posted by codinglifestyle in ASP.NET, C#, CodeProject, Architecture.

Anyone who has written enterprise software knows an ideal design and a linear code path are not too common. Even if you are the architect of your application, that doesn’t save you from the hoops you must jump through to connect and interact with other systems in your enterprise’s ecosphere. So it is in my system with a seemingly simple task: loading addresses. While there is only one address control and one presenter, when we get to the data layer there are many different code paths depending on the type of address, the selected company, sales areas, and backend systems. For a large order with hundreds of quotes there are theoretically several hundred address controls to be populated efficiently throughout the ordering process.

Before we get going on optimization let’s start with some basics. We have data consumers who want data. These consumers may represent my address control, a table, or any number of components that need data. In addition, there may be hundreds or thousands of them. We can’t allow each instance to simply call our data layer individually or we’ll cripple the system. These data requests need to be managed and optimized.

We are going to encapsulate our data request in an object. If your data layer function signature takes 3 parameters, simply move these to your new data request object; later we’ll rewrite your data function to take a List<DataRequest> instead of 3 parameters. Of course you may have many parameters or complex objects you need to pass, all the better to encapsulate them! So now we have an object which contains all the information we need to ultimately call the data layer.
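To make that concrete, here is a minimal sketch of such a request object. The property names are illustrative (they mirror the criteria used later in this post) and AddressEntity is a hypothetical result type:

//A sketch of the encapsulated data request
public class DataLoadAddressesEntity
{
    //Criteria formerly passed as individual parameters to the data layer
    public string AddressType { get; set; }
    public string SoldToId    { get; set; }
    public string SalesArea   { get; set; }

    //Populated once the data call is made (more on this below)
    public List<AddressEntity> Results { get; set; }
}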

When you have hundreds or thousands of data requests there is a very good chance that many of those requests are for the same data. Minimizing the number of calls for actual data is what we’re after here. Of course, due to how legacy data functions may be written, they may be too narrow in scope. Some queries, for example, may be filtered based on a function parameter, which then might require multiple calls to get the complete data required across all data requests. This is the kind of analysis you will need to perform on your own to perhaps bring back the larger data set, cache it, and return pieces of it to individual data requests. One of the great advantages of encapsulating your data requests is analysing them and being able to better satisfy them by rewriting your data layer functions.

Next we must design our controls and other data consumers to be patient. Instead of making a call to get some data which is immediately fulfilled they will instead register a data request. This will give the hundreds or thousands of other data consumers a chance to register their data requests.

Once the registration window is closed we can trigger our service or presenter to make the necessary data layer call. As alluded to above, we will pass the complete list of data requests to the data layer. We will then have the opportunity to optimize the data requests to minimize the number of actual data calls made and to make them in bulk. This is the part where you might start worrying how to tackle this gargantuan task. What if I told you I could optimize your data requests in just a few lines of code?

//Create data sets of like requests  (minimized data requests)
//
Dictionary<DataLoadAddressesEntity, List<DataLoadAddressesEntity>> requestsEx = new Dictionary<DataLoadAddressesEntity, List<DataLoadAddressesEntity>>();
foreach (DataLoadAddressesEntity request in requests)
{
    //The first request with a given identity becomes a key (a prime request);
    //equal requests which follow are queued up under it
    if (!requestsEx.ContainsKey(request))
        requestsEx.Add(request, new List<DataLoadAddressesEntity>());
    else
        requestsEx[request].Add(request);
}

That wasn’t so hard, was it? Now I have a dictionary whose keys represent the minimized number of data calls truly necessary. I call these prime data requests and they are the keys in the dictionary. Each prime data request may then be used to populate the list of equal data requests which are held in the values of the dictionary. So once the prime data request is satisfied we merely need to copy the results across the values in the data set:

//////////////////////////////////////////////////////
//Copy prime request results reference across data set
//
requestsEx[requestPrime].ForEach(r => r.Results = requestPrime.Results);

You might notice that I’ve included a Results property in my data request object. The great thing about encapsulating our request in an object is how handy it is to add more properties to keep everything together. Keep in mind that we are merely copying a reference to the prime request’s results across all like data requests. Therefore, changing one affects all the others, which makes sense but must be understood so it doesn’t become dangerous. Some developers can go many years without really considering what reference types are, so make sure to mentor your team on the basics of value vs reference types. Coming from C++ and the wondrous pointer, I take full advantage of references as you will see in my final summary below.
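To make the reference semantics concrete, here is a trivial sketch (AddressEntity is again hypothetical):

//Both requests now share one Results reference
requestPrime.Results = new List<AddressEntity>();
requestSecondary.Results = requestPrime.Results;

//Changes through one are instantly visible through the other
requestPrime.Results.Add(new AddressEntity());
Trace.WriteLine(requestSecondary.Results.Count);   //1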

So you must be wondering what voodoo magic I’m using to optimize the data set so easily. Did you read the title? To know if one data request is equal to another it is up to us to implement IEquatable<DataRequest> and override GetHashCode. This is the voodoo that allows us to use Dictionary.ContainsKey(dataRequest), singling out a prime data request from the secondary data requests. So, how do we decide if one request is equal to another?

With so many permutations and variables in the data layer where does one start? There is no easy answer for this one. It is time for some analysis to boil down what exactly makes one data request different from another. This is the hardest part of the exercise. I started with a spreadsheet, looked at all the variables each code path required, and developed a matrix. I was able to eliminate many of the variables which were the same no matter what type of request it was (CompanyID for example). What appeared an arduous task boiled down to just a few criteria to differentiate requests from one another. Of course, it took hours of eliminating unused variables, proving assumptions that other variables were always equal, and cleaning up the code in order to see the light through the reeds.

Once your analysis is done you now know how to tell if one data request is equal to another so we don’t waste resources making the same call twice. Implementing IEquatable<DataRequest> will have you implementing Equals in your data request object where the comparing type is another data request:

public bool Equals(DataLoadAddressesEntity other)

For each criterion from your analysis, let’s assume we have a property in your data request object. For each criterion, if this.Property != other.Property then return false. If the other data request’s criteria are all the same, you are both after the same data. So if you fall through all the criteria comparisons, return true; you now have one less data call to make.

You must repeat the same logic, in principle, for the GetHashCode override. Instead of comparing the search criteria, this time you are combining the criteria’s hash codes. So, much like above, if you have 2 data requests which need the same data you should also have 2 hash codes which are equal. In this way you can use the dictionary, as above, to optimize the data requests.

Although the criteria that pertain to your data requests will differ, I will show mine here as I love seeing examples:

#region IEquatable Members
public bool Equals(PartnerFunctionSearchEntity other)
{
    if (!this.AddressType.Equals(other.AddressType)) return false;
    if (!this.SoldToId.Equals(other.SoldToId))       return false;
    if (!this.SalesArea.Equals(other.SalesArea))     return false;

    return SearchCriteria.DictionaryEqual(other.SearchCriteria);
}

public override int GetHashCode()
{
    unchecked  //overflow is ok, just wrap
    {
        int hash        = 17;
        const int prime = 31;  //Prime numbers

        hash = hash * prime + AddressType.ToString().GetHashCode();
        if (!string.IsNullOrEmpty(SalesArea))
            hash = hash * prime + SalesArea.GetHashCode();
        if (!string.IsNullOrEmpty(SoldToId))
            hash = hash * prime + SoldToId.GetHashCode();

        foreach (KeyValuePair<EAddressSearchCriteria, string> keyvalue in SearchCriteria)
            hash = hash * prime + keyvalue.GetHashCode();

        return hash;
    }
}
#endregion

You may be wondering where the best place is to put the various parts of this solution. I would suggest a service layer which sits between the data consumers and the data layer. In my case, with many instances of an address control, I placed it in the control’s presenter. As there is a 1:1 relationship between control and presenter, the latter contains a member variable which is the data request. On registration it contains only the criteria necessary to get the data. I am using the per request cache (HttpContext.Current.Items) to store my List<DataRequest> where all registered data requests accumulate.
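A sketch of that registration step, assuming a hypothetical cache key and the illustrative entity from above:

//In the presenter: register rather than fetch (REQUEST_LIST_KEY is hypothetical)
private DataLoadAddressesEntity _Request;

public void RegisterDataRequest(string addressType, string soldToId, string salesArea)
{
    _Request = new DataLoadAddressesEntity
    {
        AddressType = addressType,
        SoldToId    = soldToId,
        SalesArea   = salesArea
    };

    //Accumulate all registered requests in the per request cache
    List<DataLoadAddressesEntity> requests =
        HttpContext.Current.Items["REQUEST_LIST_KEY"] as List<DataLoadAddressesEntity>;
    if (requests == null)
        HttpContext.Current.Items["REQUEST_LIST_KEY"] = requests = new List<DataLoadAddressesEntity>();
    requests.Add(_Request);
}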

Remember, my presenter only holds a reference to its _Request member variable… the same reference which is in the data request queue and the same reference to which the results will be assigned.

Once registration closes, the data layer call is triggered with the list of data requests. The optimization happens here, nearest the source, so as not to be repeated. Once the requests are optimized and the actual data calls are made, the _Request.Results still held in the presenter’s member variable will be populated and ready to set to the view for display.
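Putting the pieces together, the fulfillment step in the data layer might look like this sketch, where LoadAddresses stands in for the real bulk data call:

//One real data call per prime request, then share the results by reference
foreach (DataLoadAddressesEntity requestPrime in requestsEx.Keys)
{
    requestPrime.Results = LoadAddresses(requestPrime);

    //Copy prime request results reference across the data set
    requestsEx[requestPrime].ForEach(r => r.Results = requestPrime.Results);
}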

Passing Anonymous Types with Dynamic Lists October 1, 2013

Posted by codinglifestyle in C#, CodeProject, Javascript.

Recently I was rewriting a function with a large method signature which took several arrays as parameters.  As you might guess, the index across these arrays was assumed to be the same, which is error-prone.  I thought we should really encapsulate all this information into a class or struct and pass in a single list instead.  Then I stopped myself, as there was no use for this class beyond this one-time call to another function.

Then I thought, I know, I could use an anonymous type instead.

var datum = new { a = myData.a, b = myData.b, /* c, d, ..., */ z = myData.z };

This seems like a reasonable approach and exactly what throw-away anonymous types are for.  Then I tried to add this to a list.  Hmm, with a little creativity I was able to overcome this… except it only worked within the scope of the same function.  Well, I could have passed a list of objects, but what would I cast them to?
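For the record, the “little creativity” was something like this sketch: letting the compiler infer a list of the anonymous type from an array literal. It works, but only while you stay inside the method that declared it:

//Infer a List of the anonymous type from an array literal (requires System.Linq)
var data = new[] { new { a = 1, b = "test" } }.ToList();
data.Add(new { a = 2, b = "another" });
//Outside this method the element type cannot be named, so the list cannot be passed usefully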

I really thought I’d have to cave in and just create the new class but then there was one option I hadn’t considered: dynamic.  Like JavaScript, this frees you from the restriction of static typing which was inhibiting a simple notion to pass a list of anonymous types to another function.  Now our list definition looks like this:

var data = new List<dynamic>();

Now this means I could really add in anything and its evaluation will be resolved at runtime.  So we’re finally free to use our anonymous class however we want.  We could even code access to properties which we aren’t supplying and the code will still compile (however you’ll get a nasty surprise at runtime, as sketched below).

protected void Bob()
{
    List<dynamic> data = new List<dynamic>();

    //
    //Assume lots of processing to build up all these arguments
    //
    data.Add(new { a = 1, b = "test" });

    Fred(data);
}

private void Fred(List<dynamic> data)
{
    //Fred processing logic
    foreach (var datum in data)
    {
        Trace.WriteLine(String.Format("{0}: {1}", datum.a, datum.b));
    }
}
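And that nasty runtime surprise looks like this sketch: the member access compiles, but the runtime binder throws because the anonymous object has no such property:

private void Fred(List<dynamic> data)
{
    foreach (var datum in data)
    {
        //Compiles, but throws RuntimeBinderException at runtime:
        //'object' does not contain a definition for 'c'
        Trace.WriteLine(datum.c);
    }
}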

The more evolved my client coding becomes the more restraining statically typed languages can feel.  With the dynamic keyword C# is one step ahead of me allowing a future of amazing code determined by an unlimited number of factors at runtime.

Copy and Paste Formatting with Visual Studio’s Dark Theme May 17, 2013

Posted by codinglifestyle in CodeProject, Visual Studio, Visual Studio 2012.

I recently upgraded to VS2012 and, like most of you, was aghast at the default theme. Sure, after installing the updates the blue theme was good. But before updating I tried the dark theme… and I liked it! As our phones and tablets often use dark themes and websites are being remade to look like tablet apps the dark theme had a modern look to it and is easy on the eyes.

Look, it doesn’t matter what I think of the dark theme. I simply want to discuss an undesirable side effect affecting Copy & Paste (don’t forget Cut too). When copying code to an email, Word, or your IM window you realize the dark theme has a dark side:

What we get in the first paste attempt is a WYSIWYG copy of the formatting from Visual Studio. Clearly what most of us want is the second paste attempt. And the formatting issue can get even worse. When pasting to some programs you get white text on a white background. The problem is so obvious it seems the people who made the dark theme don’t actually use it day to day.

I was finding that I needed to: open options, select the blue theme, copy my code to the clipboard, open options again, and reselect the dark theme.

What a tedious workaround! So I went on a quest this morning and am happy to report I have a solution!

  1. Open Tools → Extensions and Updates
  2. Select Online (Visual Studio Gallery) and search for Productivity Power Tools 2012
  3. Download and restart Visual Studio when prompted
  4. Open Tools → Options
  5. Expand Productivity Power Tools and select HTML Copy
  6. Change the BeforeCodeSnippet option to:
     <style type="text/css">.identifier {color:black !important;}</style><pre style="{font-family}{font-size}{font-weight}{font-style}">
  7. Change EmitSpanClass to True
  8. Check EmitSpanStyle is True

You may optionally turn off all features other than HTML Copy from the “All Extensions” menu.

Let’s take a look at what this feature is doing. When you copy text to the clipboard you can have multiple data formats such as Text, RTF, and HTML. When we’re pasting our code into Word or an email it will typically use the HTML format (configuration dependent). Here is what we actually see in the clipboard when we do a copy after configuring VS as above.

<!--StartFragment-->
<style type="text/css">
 .identifier {
 color: black !important;
 }
</style>
<pre style="font-family: Consolas; font-size: 13;"><span class="keyword" style="color:#569cd6;">var</span>&nbsp;<span class="identifier" style="color:white;">bob</span>&nbsp;<span class="operator" style="color:#b4b4b4;">=</span>&nbsp;<span class="keyword" style="color:#569cd6;">string</span><span class="operator" style="color:#b4b4b4;">.</span><span class="identifier" style="color:white;">Empty</span>;
</pre>
<!--EndFragment-->

Note that we’ve removed the background colour, which means all of our identifier text is being set as white on a white background. However, the style element overrides the identifier color and sets it to black. This gives us the desired result:

var bob = string.Empty;

SoapExtensions: A Bad Day with HTTP 400 Bad Requests December 5, 2012

Posted by codinglifestyle in ASP.NET, CodeProject, IIS.

You may have found this post if you were searching for:

  • HTTP 400 Bad Request web service
  • Response is not well-formed XML web service
  • System.Xml.XmlException: Root element is missing web service
  • SoapExtension impacting all web services

Yesterday I was debugging an inconsistent issue in production. Thankfully we could track trending recurring errors and began to piece together that all incoming and outgoing web services were being negatively impacted for unknown reasons. This created a lot of pressure as backlogs of incoming calls were returning HTTP 400 Bad Request errors. Outgoing calls were silently failing without a facility to retrigger them later, creating manual work.

We suspected SSO or SSL, leading us to change settings in IIS. Being IIS 7.5, this touched the web.config, which recycles the app pool. Every time a setting in IIS was changed or an iisreset was issued it seemed to rectify the situation, but after an indeterminate amount of time the problems would resurface.

The culprit ended up being a SoapExtension. The SoapExtension modifies the soap header for authentication when making outgoing calls to a java webservice.

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
                   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Header>
    <h:BasicAuth xmlns:h="http://soap-authentication.org/basic/2001/10/"
                 SOAP-ENV:mustUnderstand="1">
      <Name>admin</Name>
      <Password>broccoli</Password>
    </h:BasicAuth>
  </SOAP-ENV:Header>
  <SOAP-ENV:Body>
    <m:echoString xmlns:m="http://soapinterop.org/">
      <inputString>This is a test.</inputString>
    </m:echoString>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

It does this with a dynamically loaded (that bit was my fault and made it a complete bitch to debug) SoapExtension taken from a legacy command line util which did this every 5 minutes:

Vendavo Sequence Diagram

This existed simply because nobody could figure out how to call the webservice directly within the web application.  Once incorporated, when the web service was called, perhaps hours after an iisreset, the SoapExtension was dynamically loaded.  The bug was that, even though it was coded to not affect anything but Vendavo, the checks were performed too late and therefore all web services, incoming and outgoing, were impacted.

SoapExtension Lifecycle

Previously the check was in the AfterSerialize message handler.  The fix was to return the original stream in ChainStream.  The hard part was knowing which webservice was making the call before ChainStream was called. The check was moved to:

public override void Initialize(Object initializer)

The initializer object was tested, setting a flag used in ChainStream to determine which stream was returned.
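A sketch of that shape is below. VendavoProxy is a hypothetical stand-in for the real proxy type, and the actual header rewriting is omitted; the point is only the early self-filtering pattern:

using System;
using System.IO;
using System.Web.Services.Protocols;

public class PMMSoapExtension : SoapExtension
{
    private bool _isVendavoCall;

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return methodInfo;
    }

    public override object GetInitializer(Type serviceType)
    {
        return serviceType;
    }

    public override void Initialize(object initializer)
    {
        //Self-filter as early as possible: flag only the outgoing call we own.
        //initializer is whatever GetInitializer returned (a LogicalMethodInfo
        //for attribute registration, a Type for config registration).
        LogicalMethodInfo method = initializer as LogicalMethodInfo;
        _isVendavoCall = method != null && method.DeclaringType.Name == "VendavoProxy";
    }

    public override Stream ChainStream(Stream stream)
    {
        //Returning the original stream untouched is what spares every other
        //incoming and outgoing web service
        if (!_isVendavoCall)
            return stream;

        //For the flagged call the real extension substitutes a buffered stream
        //here so the soap header can be rewritten in ProcessMessage
        return stream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        //Header manipulation for the flagged call omitted for brevity
    }
}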

So, lesson learned: beware that SoapExtensions may impact all soap calls.  While you can specify a custom attribute to limit the extension to web methods you publish, you cannot use this filtering mechanism on webservices you consume.  This means you must self-filter or risk affecting all incoming and outgoing web services unintentionally.

Also, dynamically loading a setting which belongs in the web.config was a dumb idea which delayed identification of the problem. Now we use this:

<system.web>
   <webServices>
      <soapExtensionTypes>
         <add type="SBA.Data.PMMSoapExtension, SBA.Data" priority="1" group="High" />
      </soapExtensionTypes>
   </webServices>
</system.web>

Ref:
http://www.hanselman.com/blog/ASMXSoapExtensionToStripOutWhitespaceAndNewLines.aspx

http://msdn.microsoft.com/en-ie/magazine/cc164007(en-us).aspx

Software Architect Conference 2012 November 19, 2012

Posted by codinglifestyle in Architecture, ASP.NET, CodeProject, Parallelism.

I was fortunate enough to have the opportunity to attend the Software Architect Conference this year in London.  This is the same group which puts on DevWeek.  It was short and sweet, just 2 days without the additional sessions before and after.  Often with the daily grind you simply don’t have the time or inclination to challenge yourself with the sort of material presented at these conferences.  This is what makes them unique: for a few precious days you are free of distractions to consider how and why we do what we do.  I certainly found it useful and some of the speakers were truly impressive.  While the technology we use continues to change at the speed of light, the great thing about software architecture is many of the basic principles of building a stable, well-engineered system haven’t changed since medieval times.

Keynote

  • Theme: 21st century architects should aspire to be like medieval “master builders”
    • 7 years apprentice, many years to master, administers the project, deals with client, but still a master mason
    • Keep coding – credibility with team, mitigates ivory tower
  • 20th century software architects
    • Stepped away from the code
    • UML
    • Analysis paralysis
    • Ivory Tower syndrome
  • Architecture traps
    • Enterprise Architecture Group – not sustainable, disconnected
    • CV driven development – ego and fun over needs and requirements
    • Going “Post-technical” – no longer involved in programming
  • Software Architecture summed up
    • Create a shared vision – get everyone to move in the same direction
  • Architectural lessons learnt lost in Agile – baby out with the bath water
    • It is a myth that there is a conflict between good software architecture and agile
  • What we do
    • Requirements and constraints
    • Evaluate and vet technology
    • Design software
    • Architectural evaluation
    • Code!
    • Maintainability
    • Technical ownership
    • Mentoring
  • True team leadership is collaborative / mentoring
  • Big picture: Just enough architecture to provide vision enough to move forward

Architectural Styles

  • Architectural definition defines 3 things
    • What are the structural elements of the system?
    • How are they related to each other?
    • What are the underlying principles and rationale to the previous 2 questions?
  • Procedural
    • Decompose a program into smaller pieces to help achieve modifiability.
    • Single threaded sequential execution
  • RPC Model
    • Still procedural: single thread of control
  • Threads
    • Decouples activities from main process but still procedural
    • Shared data must be immutable or copied
    • Some people, when confronted with a problem, think, “I know, I’ll use threads,” and then two they hav erpoblesms.
  • Event based, Implicit Invocation
    • The components are modules whose interfaces provide both a collection of procedures and a set of events
    • Extensible / free plumbing
    • Inversion of control (not dependency inversion)
  • Messaging
    • Asynchronous way to interact reliably
    • Instead of threads and shared memory use process independent code and message passing
  • Layers
    • Regardless of interactions and coupling between different parts of a system, there is a need to develop and evolve them independently
    • Each layer having a separate and distinct responsibility following a reasoned and clear separation of concerns
    • Often “partitioned” but not true layers due to cross references which sneak in
  • Alternate Layers – spherical
    • Core – domain model
    • Inner crust – services wrapped around core
    • Outer crust – wrapped external dependencies
  • Micro-kernel / Plug-in
    • Small hub with everything plugged in
    • Separates a minimal functional core from extended functionality and customer-specific parts
  • Shared repository
    • DB and the like
    • Procedures secondary, data is king!
    • Maintain all data in a central repository shared by all functional components of the data-driven application and let the availability, quality, and state of that data trigger and coordinate the control flow of the application logic.
  • Pipes & Filters
    • Divide the application’s task into several self-contained data processing steps and connect these steps to a data processing pipeline via intermediate data buffers.
    • Process & queue → process & queue → process & queue

The Architecture of an Asynchronous Application

  • Heavy focus on messaging throughout talk
  • About Messaging
    • Guaranteed delivery at a cost
    • Reliable and scalable
    • Subscription models
      • 1 : n
      • Round robin
      • Publish / Subscribe
  • Messaging Terms
    • Idempotency – will doing something twice change data / state?
    • Poison message – situation where a message keeps being redelivered (perhaps because an exception is thrown before an ack is returned to queue)
  • Messaging platforms
    • MSMQ – MS specific (personally found it easy enough to use)
    • IBM MQ
    • NServiceBus
    • RabbitMQ – multiplatform, multilanguage bindings. Mentioned in numerous talks and the focus of this talk.
    • SignalR – interesting client-side messaging platform could be a more powerful model than using web services on the client
      • install-package SignalR with NuGet
      • Picks best available connection method
      • Push from server to client
      • Broadcast to all or to a specific client

Async with C# 5

  • This talk is largely about Tasks and iterates through several examples of an application trying various asynchronous styles. The point is to try to get a minimal syntax such that an asynchronous application can be written in the same number of lines as a procedural program.
  • Context – must know the identity of which thread is executing. Critical in UIs and error handling
    • SynchronizationContext class can revert thread context to calling thread (as can several other methods such as Invoke)
  • Tasks – a piece of asynchronous functionality
    • Uses continuations to handle results
  • Async keyword – marks a function to allow use of the await keyword. Must return void or a Task.

    private async void CalculatePi()
    {
      // Create the task which runs asynchronously.
      Task<double> result = CalculatePiAsync();

      // Await the result without blocking the UI thread.
      double pi = await result;

      // Display the result.
      textBox1.Text += pi;
    }

  • Putting a try/catch around this and the compiler will ensure that the error is rethrown in the correct context.
  • Automatic use of thread pool which measures throughput to scale number of running threads up or down, as appropriate
  • Progress / Cancellation Features
    • IProgress<T>
  • Can launch a collection of classes and then use different operation types such as
    • var task = Task.WhenAny(tasks);
    • which returns when the first task completes. Or use Task.WhenAll to wait for all tasks.
  • WCF can generate the async methods to use tasks when adding Service References -> Advanced.

Inside Requirements

  • Kevlin Henney, author of 97 Things Every Programmer Should Know and Pattern-Oriented Software Architecture
  • While listening to requirements we often stop listening while jumping ahead to solutions
  • Killer question when cutting through nefarious design agendas: “What problem does this solve?”
  • Patterns often misapplied – using a hammer to drive a screw leading to a pattern zoo
  • Composing a solution to a problem rather than analysis to understand the problem
  • Many to many relationships don’t need to be normalized (they model the real world)
  • Describing is not the same as prescribing
  • A model is an abstraction of a point of view for a purpose
    • Good – omits irrelevant detail
    • Bad – omits necessary detail
  • RM-ODP: reference model using viewpoints – a way of looking at a system / environment

    • Enterprise – What does it do for the business?
    • Information – What does it need to know?
    • Computational – Decomposition into parts and responsibilities
    • Engineering – Relationship of parts
    • Technology – How will we build it?
  • Use Case
    • Use inverted pyramid style to place the most important detail at the top. Move post-condition next to pre-condition. Sequence, containing detail about how you accomplish the steps in-between pre and post, goes at the bottom as it is only of interest to implementers.
      • Intent
      • Pre-condition
      • Post-condition
      • Sequence – lots of juicy detail but actually least important from an architecture point of view
  • User Story
    • Traditional Connextra form
      • As a <role>,
      • I want <goal/desire>
      • So that <benefit>
        • As an Account Holder
        • I want to withdraw cash from an ATM
        • So that I can get money when the bank is closed
    • Dan North scenario form
      • Given <a context>
      • When <a particular event occurs>
      • Then <an outcome is expected>
        • Scenario 1: Account has sufficient funds
        • Given the account balance is $100
        • And the card is valid
        • And the machine contains enough money
        • When the Account Holder requests $20
        • Then the ATM should dispense $20
        • And the account balance should be $80
        • And the card should be returned
  • Problems with the Use Case / User Story approach
    • Observations are always made through a filter or world-view
    • Until told what to observe you don’t know what you’ll get. In that case, is it even relevant?
    • Use Case Diagrams neglect to notice they are fundamentally text/stories
  • Context Diagrams – shows the world and relationships around the system (UML actors)
    • Litmus test: what industry does the diagram apply to?
    • Not a technical decomposition
    • You’re an engineer planning to build a bridge across a river. So you visit the site. Standing on one bank of the river, you look at the surrounding land, and at the river traffic. You feel how exposed the place is, and how hard the wind is blowing and how fast the river is running. You look at the bank and wonder what faults a geological survey will show up in the rocky terrain. You picture to yourself the bridge that you are going to build. (Software Requirements & Specifications: “The Problem Context”)

    • An analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned Machine must fit. Then he imagines how the Machine will fit into this context. And then he constructs a context diagram showing his vision of the problem context with the Machine installed in it.
  • Problem Frame approach – describe a problem in diagrams
  • Grady Booch
    • Use centric – visualization and manipulation of objects in a domain
    • Datacentric – integrity persisting objects
    • Computational centric – focus on transforming objects
  • In summary: move from ignorance / assumptions → knowledge gathered from multiple points of view

A Team, A System, Some Legacy… and you

  • Legacy System – so valuable it can’t be turned off (and it’s paid for!)
  • Be aware a legacy system often comes with a legacy team engrained in their own methods
  • Being late to the party
    • Software architecture often seems valuable only once things have gone wrong.
    • Architects often join existing projects to help improve difficult situations
    • Often a real sense of urgency to “improve”
    • Avoid distancing self to ivory tower and likewise avoid digging in thus losing big picture focus
  • Software architecture techniques offer a huge value for older or troubled projects. Especially techniques to understand where you are and with whom
  • Stage 1: Understand
    • Right perspective
      • See gathering requirements for perspectives of end user, business management, IT Managers, development, and support
    • Automated analysis tools
      • NDepend, Lattix, Structure 101, Sonar
      • Dependency analysis
      • Metrics
    • Monitor / Measure
      • Leverage existing production metrics
        • IIS
        • Oracle Enterprise Manager
      • Implementation metrics
      • Stakeholder opinions
    • Architectural Assessment
      • Systems Quality Assessment
        • Context and stakeholder requirements
        • Functional and deployment views
        • Monitor and measure
        • Automated analysis
        • Assessment Patterns
          • ATAM – architectural trade off analysis method
          • LAAAM – lightweight architectural assessment method – more practical
          • TARA – tiny architectural review approach (recommended)
    • Minimal Modelling
      • Define notation / terminology
      • Break up system to different viewpoints
        • Functional
        • Data
        • Code
        • Runtime
        • Deployment – systems / services
        • Ops – run, controlled, roll-back
      • Focus on essentials for target audience
    • Deliverable:
      • System context and requirements
      • Functionality and deployment views
      • Improve Analysis
      • Requirements Assessment
      • Identify and report
      • Conclusion for sponsor
      • Deliver findings and recommendations
  • Stage 2: Improve
    • The team must be involved or risk rockets, affecting morale, confidence, and competence
    • Choices based on risk
      • Assess -> Prioritize -> Analyse -> Mitigate
    • Engage in Production
      • Why
        • Reality check
      • How
        • Monitoring, stats, and incidence management
      • Who
        • Biz man, IT man, support
    • Tame the Support Burden
      • Drain on development
      • Support team can offset this
      • Avoid “over the wall” mentality
    • Continuous Integration and Deployment
      • Start simple
      • Increased efficiency and reliability
    • Automated Testing
      • Unit test + coverage, regression tests
      • Costly
    • Safe step evolution
      • Control risk
      • Wrap with tests
      • Partition
      • Simplify
      • Improve
      • Generalize
      • Repeat
    • Stay coding – but if a pure architect stay off the critical path
      • Beware ROI of your coding skills vs. architect’s skills
      • Refactor, write unit tests, address warnings
  • Define the future
    • Good for the team
    • Clear, credible system architecture for the medium term (1-2 years)
    • Beware: timing and predictions

Technical Debt

  • As an evolving program is continually changed, its complexity (reflecting deteriorating structure) increases unless work is done to maintain or reduce it
  • Technical Debt is a metaphor developed by Ward Cunningham to help us think about the above statement and choices we make about the work required to maintain a system
  • Like a financial debt, the technical debt incurs interest payments, which comes in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into a better design
  • Sometimes, upon reflection, it is better to pay interest. But are we trapped paying so much interest we can never get ahead?
  • What is the language of debt?
    • Amortise, repayment, balance, write off, restructure, asset, interest, default, credit rating, liability, principal, load, runaway, loan, consolidation, spiralling, value
  • Shipping first-time code is like going into debt. A little debt can speed delivery so long as it is paid back promptly with a rewrite
  • The danger is ignoring or not paying back the debt (compound interest!)
  • Rebuttal: A mess is not a technical debt. A mess is just a mess.
  • Counter response: The useful distinction isn’t between debt or non-debt, but between prudent and reckless debt.
  • There is also a difference between deliberate debt and inadvertent debt.

  • There is little excuse for introducing reckless debt
  • Awareness of technical debt is the responsibility of all roles
  • Consideration of debt must involve practice and process
  • Management of technical debt must account for business value

  • Perfection isn’t possible, but understanding the ideal is useful

Books, People, and Topics of Note                                       

  • Simon Brown – www.codingarchitecture.com
  • Alan Holub – www.holub.com
  • Kevlin Henney – Pattern Oriented Software Architecture
  • Grady Booch – architecture vs. design
  • Linda Rising
  • George Fairbanks – Just Enough Software Architecture
  • Roy Osherove – Notes to a Software Team Leader
  • Top 10 Traits of a Rockstar Software Developer
  • Becoming a Technical Leader – Gerald Weinberg
  • 101 Things I Learned in Architecture School
  • Architecting Enterprise Solutions
  • Software Architecture – Perspectives of an Emerging Discipline
  • Software Requirements and Specification – Michael Jackson
  • Problem Frames – Michael Jackson
  • 12 Essential Skills For SW Arch
  • Refactoring to Patterns
  • Managing Software Debt
  • Modernizing Legacy Systems
  • Working Effectively with Legacy Code
  • Growing Object-Oriented Software, Guided by Tests
  • Knockout.js – MVVM javascript library. Takes JSON and allows you to connect to HTML in a simple way I presume w/o the manual jQuery work of redrawing your control (e.g. autocomplete textbox)
  • Backbone.js – model / view extension with events
  • Parasoft Jtest smoke test
  • Selenium automation UI test
  • RabbitMQ – client side messaging queue
  • LightStreamer / SignalR – web sockets for the client (stop gap for HTML5?)

February 13, 2012

Posted by codinglifestyle in Uncategorized.

codinglifestyle:

I like waffles too!

Originally posted on Making the Complex Simple:

I’ve noticed a rather interesting thing about best practices and trends in software development, they tend to oscillate from one extreme to another over time.

So many of the things that are currently trendy or considered “good” are things that a few years back were considered “bad” and even further back were “good.”

This cycle and rule seems to repeat over and over again and is prevalent in almost all areas of software development.

It has three dimensions

Don’t misunderstand my point though, we are advancing.  We really have to look at this from a 3 dimensional perspective.

Have you ever seen one of those toys where you rock side to side in order to go forward?

snakeboard

Software development is doing this same thing in many areas.  We keep going back and forth, yet we are going forward.

Let’s look at some examples and then I’ll tell you why this…


Dictionary Extensions: Define useful extensions to play safe January 18, 2012

Posted by codinglifestyle in C#, CodeProject.
   if (searchCriteria.ContainsKey(key) &&
       !string.IsNullOrEmpty(searchCriteria[key]))
       searchTerm = searchCriteria[key];

Ever have a dictionary or similar data structure and your code has many repeated checks to pull the value when in reality you’d be happy with a default value like null or string.Empty? Well, consider the following extension to Dictionary:

    public static class DictionaryExtensions
    {
        public static TValue GetSafeValue<TKey, TValue>(this Dictionary<TKey, TValue> dictionary, TKey key)
        {
            TValue result = default(TValue);
            dictionary.TryGetValue(key, out result);
            return result;
        }
    }

Lets you do:

    Dictionary<int, string> bob = new Dictionary<int, string>();
    string safe = bob.GetSafeValue(100);
    System.Diagnostics.Trace.WriteLine(safe);

where safe is null (the default for string) as the key hasn’t been added. Stop! I know what you’re going to say and I thought of that too. You can control the default value as well:

    public static class DictionaryExtensions
    {
        /// <summary>
        /// Gets the safe value associated with the specified key.
        /// </summary>
        /// <typeparam name="TKey">The type of the key.</typeparam>
        /// <typeparam name="TValue">The type of the value.</typeparam>
        /// <param name="dictionary">The dictionary.</param>
        /// <param name="key">The key of the value to get.</param>
        public static TValue GetSafeValue<TKey, TValue>(this Dictionary<TKey, TValue> dictionary, TKey key)
        {
            return dictionary.GetSafeValue(key, default(TValue));
        }

        /// <summary>
        /// Gets the safe value associated with the specified key.
        /// </summary>
        /// <typeparam name="TKey">The type of the key.</typeparam>
        /// <typeparam name="TValue">The type of the value.</typeparam>
        /// <param name="dictionary">The dictionary.</param>
        /// <param name="key">The key of the value to get.</param>
        /// <param name="defaultValue">The default value.</param>
        public static TValue GetSafeValue<TKey, TValue>(this Dictionary<TKey, TValue> dictionary, TKey key, TValue defaultValue)
        {
            TValue result;
            if (key == null || !dictionary.TryGetValue(key, out result))
                result = defaultValue;
            return result;
        }
    }

Lets you do:

   Dictionary<int, string> bob = new Dictionary<int, string>();
   string safe = bob.GetSafeValue(100, string.Empty);
   System.Diagnostics.Trace.WriteLine(safe);

where safe is the empty string.

There’s obviously something wrong with me because I still think this stuff is cool.

I’m developing a nice little set of extensions at this point.  Often it seems like overkill to encapsulate handy functions like these in a class. I had started by deriving a class from Dictionary<TKey, TValue> but changed over to the extension methods above.

ScriptArguments: An easy way to programmatically pass arguments to script from codebehind January 13, 2012

Posted by codinglifestyle in AJAX, ASP.NET, C#, CodeProject, Javascript.

During my on-going adventures AJAXifying a crusty old business app I have been using a methodology by which most client events are set up in codebehind. The reason for this is I have easy access to my client ids, variables, and resources in codebehind. By constructing the script function calls at this stage, I can avoid messy and fragile in-line code. What I am endeavouring to do is remove all script from the markup itself. So instead of having MyPage.aspx with script mixed with markup, I have MyPage.js with all functions there. Separate js files avoid fragile in-line code which only fails at runtime, can’t be refactored, and doesn’t play as nice with the debugger. Besides, separation of markup and script is good!

The downside to setting up all this script in the codebehind is it didn’t take long for the number of arguments to grow and become unruly. My script function signature looked like this:

function fnAddressChange(ddId, labelId, checkId, sameAsId, hidSelectId, hidSameAsId, onSelectEvent)

And in the codebehind I had this:

string selectArgs     = string.Format("'{0}', '{1}', '{2}', '{3}', '{4}', '{5}'", _DropDownAddress.ClientID, _LabelAddress.ClientID, _RowSameAs.ChildClientID, (SameAs && _SameAsAddress != null) ? _SameAsAddress.LabelControl.ClientID : "-1", _HiddenSelectedID.ClientID, _HiddenSameAs.ClientID);

string selectScript   = string.Format("fnAddressSelect({0}); ", selectArgs);
string changeScript   = string.Format("fnAddressChange({0}, '{1}'); ", selectArgs, OnClientSelect);

We can see selectArgs is getting out of control. Not only is it getting ridiculous to add more to it, the function signature in script is getting huge and the ordering is easier to mess up. So I came up with this solution:

ScriptArguments args = new ScriptArguments ();
args.Add("ddId", _DropDownAddress.ClientID);
args.Add("labelId", _LabelAddress.ClientID);
args.Add("checkId", _RowSameAs.ChildClientID);
args.Add("sameAsId", (SameAs && _SameAsAddress != null) ? _SameAsAddress.LabelControl.ClientID : "-1");
args.Add("hidSelectId", _HiddenSelectedID.ClientID);
args.Add("hidSameAsId", _HiddenSameAs.ClientID);

Not only is the codebehind cleaner but I don’t have to worry about string.Format or the order in which I add arguments in. The resulting script generated is:

args.ToString()
"{ ddId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__DropDownAddress', labelId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__LabelAddress', checkId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__RowSameAs_FormField_CheckBox', sameAsId : '-1', hidSelectId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__HiddenSelectedID', hidSameAsId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__HiddenSameAs' }"

This is a javascript Object with a property per key set to the corresponding value. So in script I only need to take in one argument, the argument object. I can then access every piece of information inserted into ScriptArguments via the correct key:

function fnAddressIsReadOnly(args) {
     alert(args.ddId);
     alert(args.labelId);
}

Will alert me with “ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__DropDownAddress” and “ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__LabelAddress”.

The great thing is how simple this was to implement:

public class ScriptArguments : Dictionary<string, string>
{
    public override string ToString()
    {
        //Emit a javascript object literal: { key : 'value', ... }
        StringBuilder script = new StringBuilder("{ ");
        this.Keys.ToList().ForEach(key => script.AppendFormat("{0} : '{1}', ", key, this[key]));
        if (this.Count > 0)
            script.Remove(script.Length - 2, 2);   //trim the trailing ", "
        script.Append(" }");
        return script.ToString();
    }
}

This simple class solves a simple problem. I hope you find it useful.

FindControl: Recursive DFS, BFS, and Leaf to Root Search with Pruning October 24, 2011

Posted by codinglifestyle in ASP.NET, C#, CodeProject, jQuery.

I have a nefarious reason for posting this. It’s a prerequisite for another post I want to do on control mapping within javascript, when you have one control which affects another and there’s no good spaghetti-less way to hook them together. But first, I need to talk about my nifty FindControl extensions. Whether you turn these into extension methods or just place them in your page’s base class, you may find them handy.

We’ve all used FindControl and realized it’s a pretty lazy function that only searches its direct children and not the full control hierarchy. Let’s step back and consider what we’re searching before jumping to the code. What is the control hierarchy? It is a tree data structure whose root node is Page. The most common recursive FindControl extension starts at Page or a given parent node and performs a depth-first traversal over all the child nodes.

Depth-first search
Search order: a-b-d-h-e-i-j-c-f-k-g

/// <summary>
/// Recurse through the controls collection checking for the id
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>The control, if found, or null</returns>
public static Control FindControlEx(this Control control, string id)
{
    //Check if this is the control we're looking for
    if (control.ID == id)
        return control;

    //Recurse through the child controls
    Control c = null;
    for (int i = 0; i < control.Controls.Count && c == null; i++)
        c = FindControlEx((Control)control.Controls[i], id);

    return c;
}

You will find many examples of the above code on the net. This is the “good enough” algorithm of choice. If you have ever wondered about its efficiency, read on. Close your eyes and picture the complexity of the seemingly innocent form… how every table begets rows, begets cells, begets the controls within the cell, and so forth. Before long you realize there can be quite a complex control hierarchy, sometimes quite deep, even in a relatively simple page.

Now imagine a page with several top-level composite controls, some of them rendering deep control hierarchies (like tables). As the designer of the page you have inside knowledge about the layout and structure of the controls contained within. Therefore, you can pick the best method of searching that data structure. Look at the diagram above and imagine the b-branch was much more complex and deep. Now say what we’re trying to find is g. With depth-first you would have to search the entirety of the b-branch before moving on to the c-branch and ultimately finding the control in g. For this scenario, a breadth-first search would make more sense, as we won’t waste time searching a complex and potentially deep branch when we know the control is close to our starting point, the root.

Breadth-first search

Search order: a-b-c-d-e-f-g-h-i-j-k

/// <summary>
/// Finds the control via a breadth first search.
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>If found, the control.  Otherwise null</returns>
public static Control FindControlBFS(this Control control, string id)
{
    Queue<Control> queue = new Queue<Control>();
    //Enqueue the root control            
    queue.Enqueue(control);

    while (queue.Count > 0)
    {
        //Dequeue the next control to test
        Control ctrl = queue.Dequeue();
        foreach (Control child in ctrl.Controls)
        {
            //Check if this is the control we're looking for
            if (child.ID == id)
                return child;
            //Place the child control on in the queue
            queue.Enqueue(child);
        }
    }

    return null;
}

Recently I had a scenario where I needed to link 2 controls together that coexisted in the ItemTemplate of a repeater. The controls existed in separate composite controls.

In this example I need to get _TextBoxPerformAction’s ClientID to enable/disable it via _CheckBoxEnable. Depending on the size of the data the repeater is bound to, there may be hundreds of instances of the repeater’s ItemTemplate. How do I guarantee I get the right one? The above top-down FindControl algorithms would return the first match of _TextBoxPerformAction, not necessarily the right one. To solve this predicament, we need a bottom-up approach to find the control closest to us. By working our way up the control hierarchy we should be able to find the textbox within the same ItemTemplate instance, guaranteeing we have the right one. The problem is, as we work our way up, we will repeatedly search an increasingly large branch we’ve already seen. We need to prune the child branch we’ve already seen so we don’t search it over and over again as we work our way up.

To start we are in node 5 and need to get to node 1 to find our control. We recursively search node 5 which yields no results.

Next we look at node 5’s parent. We’ve already searched node 5, so we will prune it. Now recursively search node 4, which includes node 3, yielding no results.

Next we look at node 4’s parent. We have already searched node 4 and its children so we prune it.

Last we recursively search node 2, which includes node 1, yielding a result!

So here we can see that pruning saved us searching an entire branch repeatedly. And the best part is we only need to keep track of one id to prune.

/// <summary>
/// Finds the control from the leaf node to root node.
/// </summary>
/// <param name="ctrlSource">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>If found, the control.  Otherwise null</returns>
public static Control FindControlLeafToRoot(this Control ctrlSource, string id)
{
    Control ctrlParent = ctrlSource.Parent;
    Control ctrlTarget = null;
    string pruneId = null;

    while (ctrlParent != null &&
           ctrlTarget == null)
    {
        ctrlTarget = FindControlEx(ctrlParent, id, pruneId);
        pruneId = ctrlParent.ClientID;
        ctrlParent = ctrlParent.Parent;
    }
    return ctrlTarget;
}

/// <summary>
/// Recurse through the controls collection checking for the id
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <param name="pruneClientID">The client ID to prune from the search.</param>
/// <returns>If found, the control.  Otherwise null</returns>
public static Control FindControlEx(this Control control, string id, string pruneClientID)
{
    //Check if this is the control we're looking for
    if (control.ID == id)
        return control;

    //Recurse through the child controls
    Control c = null;
    for (int i = 0; i < control.Controls.Count && c == null; i++)
    {
        if (control.Controls[i].ClientID != pruneClientID)
            c = FindControlEx((Control)control.Controls[i], id, pruneClientID);
    }

    return c;
}

Now we have an efficient algorithm for searching leaf to root without wasting cycles searching the child branch we’ve come from. All this puts me in mind of jQuery’s powerful selection capabilities. I’ve never dreamed up a reason for it yet, but searching for a collection of controls would be easy to implement and, following jQuery’s lead, we could extend the above to search for far more than just an ID.
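As a parting sketch of that jQuery-inspired idea (my own speculation, not production code), a predicate-based search which collects every match is a small variation on the DFS above:

/// <summary>
/// Recurse through the control hierarchy collecting every control
/// matching the predicate (depth-first)
/// </summary>
public static List<Control> FindControls(this Control control, Func<Control, bool> match)
{
    List<Control> results = new List<Control>();
    if (match(control))
        results.Add(control);
    foreach (Control child in control.Controls)
        results.AddRange(child.FindControls(match));
    return results;
}

//E.g. collect every TextBox on the page
List<Control> textboxes = Page.FindControls(c => c is TextBox);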

Pass a Name Value Pair Collection to JavaScript August 8, 2011

Posted by codinglifestyle in ASP.NET, CodeProject, Javascript.

In my crusade against in-line code I am endeavouring to clean up the script hell in my current project. My javascript is littered with these types of statements:

var hid = '<%=hidSelectedItems.ClientID%>';
var msg = '<%=GetResourceString("lblTooManyItems")%>';

Part of the cleanup is to minimize script on the page and instead use a separate .js file. This encourages me to write static functions which take in ids and resources as parameters, allows for easier script debugging, and removes all in-line code making maintenance or future refactoring easier.

While moving code to a proper .js file is nice there are times we might miss the in-line goodness. Never fear, we can build a JavaScript object containing properties for anything we might need with ease. This equates to passing a name/value pair collection to the JavaScript from the code behind. Take a look at this example:

    ScriptOptions options = new ScriptOptions();
    options.Add("ok", GetResourceString("btnOK"));
    options.Add("oksave", GetResourceString("btnOkSave"));
    options.Add("cancel", GetResourceString("btnCancel"));
    options.Add("viewTitle", GetResourceString("lblAddressEditorView"));
    options.Add("editTitle", GetResourceString("lblAddressEditorEdit"));
    options.Add("createTitle", GetResourceString("lblAddressEditorCreate"));
    options.RegisterOptionsScript(this, "_OptionsAddressEditorResources");

Here we’re using the ScriptOptions class to create an object called _OptionsAddressEditorResources we can access in our script. Now let’s see these options in use:

function fnAddressEditDialog(address, args) {
    //Define the buttons and events
    var buttonList = {};
    buttonList[_OptionsAddressEditorResources.ok]     = function() { return fnAddressEditOnOk(jQuery(this), args); };
    buttonList[_OptionsAddressEditorResources.oksave] = function() { return fnAddressEditOnOkSave(jQuery(this), args); };
    buttonList[_OptionsAddressEditorResources.cancel] = function() { jQuery(this).dialog("close"); };

    //Display the dialog
    jQuery("#addressEditorDialog").dialog({
        title: _OptionsAddressEditorResources.editTitle,
        modal: true,
        width: 535,
        resizable: false,
        buttons: buttonList
    });
}

Above we see the jQuery dialog using the resources contained within the _OptionsAddressEditorResources object.

So this seems simple but pretty powerful. Below is the ScriptOptions class which simply extends a Dictionary and writes out the script creating a named global object. Good luck cleaning up your script hell. Hopefully this will help.

    /// <summary>
    /// Class for generating javascript option arrays
    /// </summary>
    public class ScriptOptions : Dictionary<string, string>
    {
        /// <summary>
        /// Adds the control id to the options script
        /// </summary>
        /// <param name="control">The control.</param>
        public void AddControlId(WebControl control)
        {
            this.Add(control.ID, control.ClientID);
        }

        /// <summary>
        /// Registers all the key/values as an options script for access in the client.
        /// </summary>
        /// <param name="page">The page</param>
        /// <param name="optionsName">Name of the options object</param>
        public void RegisterOptionsScript(Page page, string optionsName)
        {
            if (!page.ClientScript.IsStartupScriptRegistered(page.GetType(), optionsName))
            {
                StringBuilder script = new StringBuilder(string.Format("var {0} = new Object();", optionsName));
                this.Keys.ToList().ForEach(key => script.Append(string.Format("{0}.{1}='{2}';", optionsName, key, this[key])));
                page.ClientScript.RegisterStartupScript(page.GetType(), optionsName, script.ToString(), true);
            } 
        }
    }