Data Optimization Using Data Request Objects Implementing IEquatable December 23, 2013
Posted by codinglifestyle in Architecture, ASP.NET, C#, CodeProject.Tags: data, dictionary, GetHashCode, IEquatable, optimization, requests
add a comment
Anyone who has written enterprise software knows an ideal design and a linear code path are not too common. Even if you are the architect of your application, that doesn't save you from the hoops you must jump through to connect and interact with other systems in your enterprise's ecosphere. So it is in my system with a seemingly simple task: loading addresses. While there is only one address control and one presenter, when we get to the data layer there are many different code paths depending on the type of address, the selected company, sales areas, and backend systems. For a large order with hundreds of quotes there are theoretically several hundred address controls to be populated efficiently throughout the ordering process.
Before we get going on optimization let’s start with some basics. We have data consumers who want data. These consumers may represent my address control, a table, or any number of components that need data. In addition, there may be hundreds or thousands of them. We can’t allow each instance to simply call our data layer individually or we’ll cripple the system. These data requests need to be managed and optimized.
We are going to encapsulate our data request in an object. If your data layer function signature takes 3 parameters, simply move these to your new data request object; later we'll rewrite your data function to take a List<DataRequest> instead of 3 parameters. Of course you may have many parameters or complex objects you need to pass, all the better to encapsulate them! So now we have an object which contains all the information we need to ultimately call the data layer.
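To make this concrete, a minimal sketch of such a request object might look like the following. The members are assumptions modelled on the examples later in this post, and AddressEntity is just a placeholder result type:

//Placeholder result type for illustration
public class AddressEntity { /* address fields elided */ }

public class DataLoadAddressesEntity
{
    //Criteria which formerly travelled as individual parameters to the data layer
    public string AddressType { get; set; }
    public string SoldToId { get; set; }
    public string SalesArea { get; set; }

    //Populated later, once the prime data request is satisfied
    public List<AddressEntity> Results { get; set; }
}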
When you have hundreds or thousands of data requests there is a very good chance that many of those requests are for the same data. What we're after here is minimizing the number of calls for actual data. Of course, due to how legacy data functions were written, they may be too narrow in scope. Some queries, for example, may be filtered based on a function parameter, which might then require multiple calls to get the complete data required across all data requests. This is the kind of analysis you will need to perform on your own, perhaps bringing back the larger data set, caching it, and returning pieces of it to individual data requests. One of the great advantages of encapsulating your data requests is being able to analyse them and better satisfy them by rewriting your data layer functions.
Next we must design our controls and other data consumers to be patient. Instead of making a call to get some data which is immediately fulfilled they will instead register a data request. This will give the hundreds or thousands of other data consumers a chance to register their data requests.
Once the registration window is closed we can trigger our service or presenter to make the necessary data layer call. As alluded to above, we will pass the complete list of data requests to the data layer. We then have the opportunity to optimize the data requests, minimizing the number of actual data calls made and making them in bulk. This is the part where you might start worrying how to tackle this gargantuan task. What if I told you I could optimize your data requests in just a few lines of code?
//////////////////////////////////////////////////////
//Create data sets of like requests (minimized data requests)
//
Dictionary<DataLoadAddressesEntity, List<DataLoadAddressesEntity>> requestsEx =
    new Dictionary<DataLoadAddressesEntity, List<DataLoadAddressesEntity>>();
foreach (DataLoadAddressesEntity request in requests)
{
    if (!requestsEx.ContainsKey(request))
        requestsEx.Add(request, new List<DataLoadAddressesEntity>());
    else
        requestsEx[request].Add(request);
}
That wasn’t so hard, was it? Now I have a dictionary whose keys represent the minimized number of data calls truly necessary. I call these prime data requests and they are the keys in the dictionary. Each prime data request may then be used to populate the list of equal data requests which are held in the values of the dictionary. So once the prime data request is satisfied we merely need to copy the results across the values in the data set:
//////////////////////////////////////////////////////
//Copy prime request results reference across data set
//
requestsEx[requestPrime].ForEach(r => r.Results = requestPrime.Results);
You might notice that I've included a Results property in my data request object. The great thing about encapsulating our request in an object is how handy it is to add more properties to keep everything together. Keep in mind that we are merely copying a reference of the prime request's results across all like data requests. Therefore, changing one affects all the others, which makes sense but must be understood so as not to be dangerous. Some developers can go many years without really considering what reference types are, so make sure to mentor your team on the basics of value vs reference types. Coming from C++ and the wondrous pointer, I take full advantage of references as you will see in my final summary below.
So you must be wondering what voodoo magic I'm using to optimize the data set so easily. Did you read the title? To know if one data request is equal to another it is up to us to implement IEquatable<DataRequest> and override GetHashCode. This is the voodoo that allows us to use Dictionary.ContainsKey(dataRequest), singling out a prime data request from the secondary data requests. So, how do we decide if one request is equal to another?
With so many permutations and variables in the data layer where does one start? There is no easy answer for this one. It is time for some analysis to boil down what exactly makes one data request different from another. This is the hardest part of the exercise. I started with a spreadsheet, looked at all the variables each code path required, and developed a matrix. I was able to eliminate many of the variables which were the same no matter what type of request it was (CompanyID for example). What appeared an arduous task boiled down to just a few criteria to differentiate requests from one another. Of course, it took hours of eliminating unused variables, proving assumptions that other variables were always equal, and cleaning up the code in order to see the light through the reeds.
Once your analysis is done you now know how to tell if one data request is equal to another so we don’t waste resources making the same call twice. Implementing IEquatable<DataRequest> will have you implementing Equals in your data request object where the comparing type is another data request:
public bool Equals(DataLoadAddressesEntity other)
For each criterion from your analysis, let's assume we have a property in your data request object. For each criterion, if this.Property != other.Property you return false. If the other data request's criteria are all the same, you are both after the same data. So if you fall through all the criteria comparisons, return true and you have one less data call to make.
You must repeat the same logic, in principle, for the GetHashCode override. Instead of comparing the search criteria, this time you are adding up the criteria’s hash codes. So much like above, if you have 2 data requests which need the same data you should also have 2 hash codes which are equal. In this way you can use the dictionary, as above, to optimize the data requests.
Although the criteria that pertains to your data requests will differ I will show mine here as I love seeing examples:
#region IEquatable Members

public bool Equals(PartnerFunctionSearchEntity other)
{
    if (!this.AddressType.Equals(other.AddressType))
        return false;
    if (!this.SoldToId.Equals(other.SoldToId))
        return false;
    if (!this.SalesArea.Equals(other.SalesArea))
        return false;
    return SearchCriteria.DictionaryEqual(other.SearchCriteria);
}

public override int GetHashCode()
{
    unchecked //overflow is ok, just wrap
    {
        int hash = 17;
        const int prime = 31; //Prime numbers
        hash = hash * prime + AddressType.ToString().GetHashCode();
        if (!string.IsNullOrEmpty(SalesArea))
            hash = hash * prime + SalesArea.GetHashCode();
        if (!string.IsNullOrEmpty(SoldToId))
            hash = hash * prime + SoldToId.GetHashCode();
        foreach (KeyValuePair<EAddressSearchCriteria, string> keyvalue in SearchCriteria)
            hash = hash * prime + keyvalue.GetHashCode();
        return hash;
    }
}

#endregion
You may be wondering where the best place is to put the various parts of this solution. I would suggest a service layer which sits between the data consumers and the data layer. In my case, with many instances of an address control, I placed it in the control's presenter. As there is a 1:1 relationship between control and presenter, the latter contains a member variable which is the data request. On registration it contains only the criteria necessary to get the data. I am using the per-request cache (HttpContext.Current.Items) to store my List<DataRequest> where all registered data requests accumulate.
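A minimal sketch of that registration step might look like this; the cache key and method name are illustrative, not from the actual presenter:

//Presenter member variable holding this control's data request
private DataLoadAddressesEntity _Request;

public void RegisterDataRequest(DataLoadAddressesEntity request)
{
    _Request = request;

    //Lazily create the request queue in the per-request cache (key name is illustrative)
    const string key = "AddressDataRequests";
    List<DataLoadAddressesEntity> requests =
        HttpContext.Current.Items[key] as List<DataLoadAddressesEntity>;
    if (requests == null)
    {
        requests = new List<DataLoadAddressesEntity>();
        HttpContext.Current.Items[key] = requests;
    }

    //The queue and the presenter now hold the same reference
    requests.Add(request);
}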
Remember, my presenter only holds a reference to its _Request member variable… the same reference which is in the data request queue and the same reference to which the results will be assigned.
Once registration closes, the data layer call is triggered with the list of data requests. The optimization happens here, nearest the source, so as not to be repeated. Once the requests are optimized and the actual data calls are made, the _Request.Results still held in the presenter's member variable will be populated and ready to set to the view for display.
SoapExtensions: A Bad Day with HTTP 400 Bad Requests December 5, 2012
Posted by codinglifestyle in ASP.NET, CodeProject, IIS.Tags: 400, Bad Request, iis7.5, soap, SoapExtension, web service, web.config, xml
1 comment so far
You may have found this post if you were searching for:
- HTTP 400 Bad Request web service
- Response is not well-formed XML web service
- System.Xml.XmlException: Root element is missing web service
- SoapExtension impacting all web services
Yesterday I was debugging an inconsistent issue in production. Thankfully we could track trending recurring errors and began to piece together that all incoming and outgoing web services were being negatively impacted for unknown reasons. This created a lot of pressure as backlogs of incoming calls were returning HTTP 400 Bad Request errors. Outgoing calls were silently failing without a facility to retrigger them later, creating manual work.
We suspected SSO or SSL leading us to change settings in IIS. Being IIS 7.5 this touched the web.config which recycles the app pool. Every time a setting in IIS was changed or an iisreset was issued it seemed to rectify the situation. But after an indeterminate amount of time the problems would resurface.
The culprit ended up being a SoapExtension. The SoapExtension modifies the soap header for authentication when making outgoing calls to a java webservice.
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
                   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Header>
    <h:BasicAuth xmlns:h="http://soap-authentication.org/basic/2001/10/" SOAP-ENV:mustUnderstand="1">
      <Name>admin</Name>
      <Password>broccoli</Password>
    </h:BasicAuth>
  </SOAP-ENV:Header>
  <SOAP-ENV:Body>
    <m:echoString xmlns:m="http://soapinterop.org/">
      <inputString>This is a test.</inputString>
    </m:echoString>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
It does this with a dynamically loaded SoapExtension (that bit was my fault and made it a complete bitch to debug) taken from a legacy command line util which made this call every 5 minutes.
This existed simply because nobody could figure out how to call the webservice directly within the web application. Once incorporated, when the web service was called, perhaps hours after an iisreset, the SoapExtension was dynamically loaded. The bug was that, even though it was coded to not affect anything but Vendavo, the checks were performed too late and therefore all web services, incoming and outgoing, were impacted.
Previously the check was in the AfterSerialize message handler. The fix was to return the original stream in ChainStream. The hard part was knowing which webservice was making the call before ChainStream was called. The check was moved to:
public override void Initialize(object initializer)
The initializer object was tested, setting a flag used in ChainStream to determine which stream was returned.
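Sketched below is roughly what that shape looks like. This is a hedged reconstruction, not the actual production extension: the class name is borrowed from the config registration below, and the "Vendavo" type check and stream handling are illustrative assumptions:

using System;
using System.IO;
using System.Web.Services.Protocols;

public class PMMSoapExtension : SoapExtension
{
    private Stream _originalStream;
    private Stream _workingStream;
    private bool _isTargetService;

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return methodInfo;
    }

    public override object GetInitializer(Type serviceType)
    {
        return serviceType;
    }

    public override void Initialize(object initializer)
    {
        //Test the initializer as early as possible: flag only the one outgoing
        //proxy we intend to modify (the type name check here is hypothetical)
        LogicalMethodInfo method = initializer as LogicalMethodInfo;
        _isTargetService = method != null &&
                           method.DeclaringType.FullName.Contains("Vendavo");
    }

    public override Stream ChainStream(Stream stream)
    {
        //Every other service gets the original stream back, untouched
        if (!_isTargetService)
            return stream;

        _originalStream = stream;
        _workingStream = new MemoryStream();
        return _workingStream;
    }

    public override void ProcessMessage(SoapMessage message)
    {
        if (!_isTargetService)
            return;

        if (message.Stage == SoapMessageStage.AfterSerialize)
        {
            //The soap header authentication rewrite would happen here,
            //then the modified bytes are pushed on to the original stream
            _workingStream.Position = 0;
            _workingStream.CopyTo(_originalStream);
        }
    }
}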
So lesson learned: beware, SoapExtensions may impact all soap calls. While you can specify a custom attribute to limit the extension to web methods you publish, you cannot use this filtering mechanism on webservices you consume. This means you must self-filter or risk affecting all incoming and outgoing web services unintentionally.
Also, dynamically loading a setting which belongs in the web.config was a dumb idea which delayed identification of the problem. Now we use this:
<system.web>
  <webServices>
    <soapExtensionTypes>
      <add type="SBA.Data.PMMSoapExtension, SBA.Data" priority="1" group="High" />
    </soapExtensionTypes>
  </webServices>
</system.web>
Ref:
http://www.hanselman.com/blog/ASMXSoapExtensionToStripOutWhitespaceAndNewLines.aspx
http://msdn.microsoft.com/en-ie/magazine/cc164007(en-us).aspx
Software Architect Conference 2012 November 19, 2012
Posted by codinglifestyle in Architecture, ASP.NET, CodeProject, Parallelism.Tags: architect, requirements, software architecture, software design, technical debt
add a comment
I was fortunate enough to have the opportunity to attend the Software Architect Conference this year in London. This is the same group which puts on DevWeek. It was short and sweet, just 2 days without the additional sessions before and after. Often with the daily grind you simply don't have the time or inclination to challenge yourself with the sort of material presented at these conferences. This is what makes them unique: for a few precious days you are free of distractions to consider how and why we do what we do. I certainly found it useful and some of the speakers were truly impressive. While the technology we use continues to change at the speed of light, the great thing about software architecture is many of the basic principles of building a stable, well-engineered system haven't changed since medieval times.
Keynote
- Theme: 21st century architects should aspire to be like medieval “master builders”
- 7 years apprentice, many years to master, administers the project, deals with client, but still a master mason
- Keep coding – credibility with team, mitigates ivory tower
- 20th century software architects
- Stepped away from the code
- UML
- Analysis paralysis
- Ivory Tower syndrome
- Architecture traps
- Enterprise Architecture Group – not sustainable, disconnected
- CV driven development – ego and fun over needs and requirements
- Going “Post-technical” – no longer involved in programming
- Software Architecture summed up
- Create a shared vision – get everyone to move in the same direction
- Architectural lessons learnt lost in Agile – baby out with the bath water
- It is a myth that there is a conflict between good software architecture and agile
- What we do
- Requirements and constraints
- Evaluate and vet technology
- Design software
- Architectural evaluation
- Code!
- Maintainability
- Technical ownership
- Mentoring
- True team leadership is collaborative / mentoring
- Big picture: Just enough architecture to provide vision enough to move forward
Architectural Styles
- Architectural definition defines 3 things
- What are the structural elements of the system?
- How are they related to each other?
- What are the underlying principles and rationale to the previous 2 questions?
- Procedural
- Decompose a program into smaller pieces to help achieve modifiability.
- Single threaded sequential execution
- RPC Model
- Still procedural: single thread of control
- Threads
- Decouples activities from main process but still procedural
- Shared data must be immutable or copied
- Some people, when confronted with a problem, think, “I know, I’ll use threads,” and then two they hav erpoblesms.
- Event based, Implicit Invocation
- The components are modules whose interfaces provide both a collection of procedures and a set of events
- Extensible / free plumbing
- Inversion of control (not dependency inversion)
- Messaging
- Asynchronous way to interact reliably
- Instead of threads and shared memory use process independent code and message passing
- Layers
- Regardless of interactions and coupling between different parts of a system, there is a need to develop and evolve them independently
- Each layer having a separate and distinct responsibility following a reasoned and clear separation of concerns
- Often “partitioned” but not true layers due to cross references which sneak in
- Alternate Layers – spherical
- Core – domain model
- Inner crust – services wrapped around core
- Outer crust – wrapped external dependencies
- Micro-kernel / Plug-in
- Small hub with everything plugged in
- Separates a minimal functional core from extended functionality and customer-specific parts
- Shared repository
- DB and the like
- Procedures secondary, data is king!
- Maintain all data in a central repository shared by all functional components of the data-driven application and let the availability, quality, and state of that data trigger and coordinate the control flow of the application logic.
- Pipes & Filters
- Divide the application’s task into several self-contained data processing steps and connect these steps to a data processing pipeline via intermediate data buffers.
- Process & queue → process & queue → process & queue
The Architecture of an Asynchronous Application
- Heavy focus on messaging throughout talk
- About Messaging
- Guaranteed delivery at a cost
- Reliable and scalable
- Subscription models
- 1 : n
- Round robin
- Publish / Subscribe
- Messaging Terms
- Idempotency – will doing something twice change data / state?
- Poison message – situation where a message keeps being redelivered (perhaps because an exception is thrown before an ack is returned to queue)
- Messaging platforms
- MSMQ – MS specific (personally found it easy enough to use)
- IBM MQ
- NServiceBus
- RabbitMQ – multiplatform, Multilanguage binding. Mentioned in numerous talks and focus of talk.
- SignalR – interesting client-side messaging platform; could be a more powerful model than using web services on the client
- install-package SignalR with NuGet
- Picks best available connection method
- Push from server to client
- Broadcast to all or to a specific client
Async with C# 5
- This talk is largely about Tasks and iterates through several examples of an application trying various asynchronous styles. The point is to try to get a minimal syntax such that an asynchronous application can be written in the same number of lines as a procedural program.
- Context – must know the identity of which thread is executing. Critical in UIs and error handling
- SynchronizationContext class can revert thread context to calling thread (as can several other methods such as Invoke)
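To illustrate (a minimal sketch; LongCalculation and textBox1 are hypothetical):

//Capture the UI context on the calling thread
SynchronizationContext ui = SynchronizationContext.Current;

ThreadPool.QueueUserWorkItem(_ =>
{
    double result = LongCalculation();                 //runs on a pool thread
    ui.Post(state => textBox1.Text += state, result);  //marshalled back to the UI thread
});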
- Tasks – a piece of asynchronous functionality
- Uses continuations to handle results
- Async keyword – marks a function to allow use of the await keyword. Must return void or a Task.
private async void CalculatePi()
{
    // Start the task which runs asynchronously.
    Task<double> task = CalculatePiAsync();

    // Await the result without blocking the calling thread.
    double result = await task;

    // Display the result.
    textBox1.Text += result;
}
- Put a try/catch around this and the compiler will ensure that the error is rethrown in the correct context.
- Automatic use of thread pool which measures throughput to scale number of running threads up or down, as appropriate
- Progress / Cancellation Features
- IProgress<T>
- Can launch a collection of tasks and then use different operation types such as
- var task = Task.WhenAny(tasks);
- which returns when the first task completes. Or use Task.WhenAll to wait for all tasks.
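For example, a minimal sketch (DownloadPageAsync is a hypothetical helper returning Task<string>):

public async Task ProcessDownloadsAsync()
{
    List<Task<string>> tasks = new List<Task<string>>
    {
        DownloadPageAsync("http://example.com/a"),
        DownloadPageAsync("http://example.com/b")
    };

    //Continues as soon as the first task completes
    Task<string> first = await Task.WhenAny(tasks);
    Console.WriteLine(first.Result);

    //Or waits until every task has completed
    string[] pages = await Task.WhenAll(tasks);
}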
- WCF can generate the async methods to use tasks when adding Service References -> Advanced.
Inside Requirements
- Kevlin Henney, author of 97 Things Every Programmer Should Know and Pattern-Oriented Software Architecture
- While listening to requirements we often stop listening while jumping ahead to solutions
- Killer question when cutting through nefarious design agendas: “What problem does this solve?”
- Patterns often misapplied – using a hammer to drive a screw leading to a pattern zoo
- Composing a solution to a problem rather than analysis to understand the problem
- Many to many relationships don’t need to be normalized (they model the real world)
- Describing is not the same as prescribing
- A model is an abstraction of a point of view for a purpose
- Good – omits irrelevant detail
- Bad – omits necessary detail
- RM-ODP: reference model using viewpoints, a way of looking at a system / environment
- Enterprise – What does it do for the business?
- Information – What does it need to know?
- Computational – Decomposition into parts and responsibilities
- Engineering – Relationship of parts
- Technology – How will we build it?
- Use Case
- Use inverted pyramid style to place the most important detail at the top. Move the post-condition next to the pre-condition. Sequence, containing detail about how you accomplish the steps in between, goes at the bottom as it is only of interest to implementers.
- Intent
- Pre-condition
- Post-condition
- Sequence – lots of juicy detail but actually least important from an architecture point of view
- User Story
- Traditional Connextra form
- As a <role>,
- I want <goal/desire>
- So that <benefit>
- As an Account Holder
- I want to withdraw cash from an ATM
- So that I can get money when the bank is closed
- Dan North scenario form
- Given <a context>
- When <a particular event occurs>
- Then <an outcome is expected>
- Scenario 1: Account has sufficient funds
- Given the account balance is $100
- And the card is valid
- And the machine contains enough money
- When the Account Holder requests $20
- Then the ATM should dispense $20
- And the account balance should be $80
- And the card should be returned
- Problems with the Use Case / User Story approach
- Observations are always made through a filter or world-view
- Until told what to observe you don’t know what you’ll get. In that case, is it even relevant?
- Use Case Diagrams neglect to notice they are fundamentally text/stories
- Context Diagrams – shows the world and relationships around the system (UML actors)
- Litmus test: what industry does the diagram apply to?
- Not a technical decomposition
- You’re an engineer planning to build a bridge across a river. So you visit the site. Standing on one bank of the river, you look at the surrounding land, and at the river traffic. You feel how exposed the place is, and how hard the wind is blowing and how fast the river is running. You look at the bank and wonder what faults a geological survey will show up in the rocky terrain. You picture to yourself the bridge that you are going to build. (Software Requirements & Specifications: “The Problem Context”)
- An analyst trying to understand a software development problem must go through the same process as the bridge engineer. He starts by examining the various problem domains in the application domain. These domains form the context into which the planned Machine must fit. Then he imagines how the Machine will fit into this context. And then he constructs a context diagram showing his vision of the problem context with the Machine installed in it.
- Problem Frame approach – describe a problem in diagrams
- Grady Booch
- Use centric – visualization and manipulation of objects in a domain
- Datacentric – integrity persisting objects
- Computational centric – focus on transforming objects
- In summary: move from ignorance / assumptions → knowledge gathered from multiple points of view
A Team, A System, Some Legacy… and you
- Legacy System – so valuable it can’t be turned off (and it’s paid for!)
- Be aware a legacy system often comes with a legacy team engrained in their own methods
- Being late to the party
- Software architecture often seems valuable only once things have gone wrong.
- Architects often join existing projects to help improve difficult situations
- Often a real sense of urgency to “improve”
- Avoid distancing yourself in an ivory tower and likewise avoid digging in, thus losing big-picture focus
- Software architecture techniques offer huge value for older or troubled projects, especially techniques to understand where you are and with whom
- Stage 1: Understand
- Right perspective
- See gathering requirements from the perspectives of end users, business management, IT managers, development, and support
- Automated analysis tools
- NDepend, Lattix, Structure101, Sonar
- Dependency analysis
- Metrics
- Monitor / Measure
- Leverage existing production metrics
- IIS
- Oracle Enterprise Manager
- Implementation metrics
- Stakeholder opinions
- Architectural Assessment
- Systems Quality Assessment
- Context and stakeholder requirements
- Functional and deployment views
- Monitor and measure
- Automated analysis
- Assessment Patterns
- ATAM – architectural trade off analysis method
- LAAAM – Lightweight architectural assessment method – more practical
- TARA – tiny architectural review approach (recommended)
- Minimal Modelling
- Define notation / terminology
- Break up the system into different viewpoints
- Functional
- Data
- Code
- Runtime
- Deployment – systems / services
- Ops – run, controlled, roll-back
- Focus on essentials for target audience
- Deliverable:
- System context and requirements
- Functionality and deployment views
- Improve Analysis
- Requirements Assessment
- Identify and report
- Conclusion for sponsor
- Deliver findings and recommendations
- Stage 2: Improve
- The team must be involved or you risk damaging morale, confidence, and competence
- Choices based on risk
- Assess -> Prioritize -> Analyse -> Mitigate
- Engage in Production
- Why
- Reality check
- How
- Monitoring, stats, and incident management
- Who
- Biz man, IT man, support
- Tame the Support Burden
- Drain on development
- Support team can offset this
- Avoid “over the wall” mentality
- Continuous Integration and Deployment
- Start simple
- Increased efficiency and reliability
- Automated Testing
- Unit test + coverage, regression tests
- Costly
- Safe step evolution
- Control risk
- Wrap with tests
- Partition
- Simplify
- Improve
- Generalize
- Repeat
- Stay coding – but if a pure architect stay off the critical path
- Beware ROI of your coding skills vs. architect’s skills
- Refactor, write unit tests, address warnings
- Define the future
- Good for the team
- Clear, credible system architecture for the medium term (1-2 years)
- Beware: timing and predictions
Technical Debt
- As an evolving program is continually changed, its complexity (reflecting deteriorating structure) increases unless work is done to maintain or reduce it
- Technical Debt is a metaphor developed by Ward Cunningham to help us think about the above statement and choices we make about the work required to maintain a system
- Like a financial debt, the technical debt incurs interest payments, which comes in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into a better design
- Sometimes, upon reflection, it is better to pay interest. But are we trapped paying so much interest we can never get ahead?
- What is the language of debt?
- Amortise, repayment, balance, write off, restructure, asset, interest, default, credit rating, liability, principal, load, runaway, loan, consolidation, spiralling, value
- Shipping first time code is like going into debt. A little debt can speed delivery so long as it is paid back promptly with a rewrite
- The danger is ignoring or not paying back the debt (compound interest!)
- Rebuttal: A mess is not a technical debt. A mess is just a mess.
- Counter response: The useful distinction isn’t between debt or non-debt, but between prudent and reckless debt.
- There is also a difference between deliberate debt and inadvertent debt.
- There is little excuse for introducing reckless debt
- Awareness of technical debt is the responsibility of all roles
- Consideration of debt must involve practice and process
- Management of technical debt must account for business value
- Perfection isn’t possible, but understanding the ideal is useful
Books, People, and Topics of Note
- Simon Brown – www.codingarchitecture.com
- Alan Holub – www.holub.com
- Kevlin Henney – Pattern Oriented Software Architecture
- Grady Booch – architecture vs. design
- Linda Rising
- George Fairbanks – Just Enough Software Architecture
- Roy Osherove – Notes to a Software Team Leader
- Top 10 Traits of a Rockstar Software Developer
- Becoming a Technical Leader – Gerald Weinberg
- 101 Things I Learned in Architecture School
- Architecting Enterprise Solutions
- Software Architecture – Perspectives of an Emerging Discipline
- Software Requirements and Specification – Michael Jackson
- Problem Frames – Michael Jackson
- 12 Essential Skills For SW Arch
- Refactoring to Patterns
- Managing Software Debt
- Modernizing Legacy Systems
- Working Effectively with Legacy Code
- Growing Object-Oriented Software, Guided by Tests
- Knockout.js – MVVM javascript library. Takes JSON and allows you to connect to HTML in a simple way I presume w/o the manual jQuery work of redrawing your control (e.g. autocomplete textbox)
- Backbone.js – model / view extension with events
- Parasoft Jtest smoke test
- Selenium automation UI test
- RabbitMQ – client side messaging queue
- LightStreamer / SignalR – web sockets for client (stop gap for HTML5?)
ScriptArguments: An easy way to programmatically pass arguments to script from codebehind January 13, 2012
Posted by codinglifestyle in AJAX, ASP.NET, C#, CodeProject, Javascript.Tags: ajax, arguements, codebehind, function signature, Javascript, programmatic
1 comment so far
During my on-going adventures AJAXifying a crusty old business app, I have been using a methodology by which most client events are set up in codebehind. The reason for this is that I have easy access to my client ids, variables, and resources in codebehind. By constructing the script function calls at this stage, I can avoid messy and fragile in-line code. What I am endeavouring to do is remove all script from the markup itself. So instead of having MyPage.aspx with script mixed with markup, I have MyPage.js with all functions there. Separate js files avoid fragile in-line code which only fails at runtime, can't be refactored, and doesn't play as nice with the debugger. Besides, separation of markup and script is good!
The downside to setting up all this script in the codebehind is it didn’t take long for the number of arguments to grow and become unruly. My script function signature looked like this:
function fnAddressChange(ddId, labelId, checkId, sameAsId, hidSelectId, hidSameAsId, onSelectEvent)
And in the codebehind I had this:
string selectArgs = string.Format("'{0}', '{1}', '{2}', '{3}', '{4}', '{5}'",
                                  _DropDownAddress.ClientID,
                                  _LabelAddress.ClientID,
                                  _RowSameAs.ChildClientID,
                                  (SameAs && _SameAsAddress != null) ? _SameAsAddress.LabelControl.ClientID : "-1",
                                  _HiddenSelectedID.ClientID,
                                  _HiddenSameAs.ClientID);
string selectScript = string.Format("fnAddressSelect({0}); ", selectArgs);
string changeScript = string.Format("fnAddressChange({0}, '{1}'); ", selectArgs, OnClientSelect);
We can see selectArgs is getting out of control. Not only is it getting ridiculous to add more to it, the function signature in script is getting huge and the ordering is easier to mess up. So I came up with this solution:
ScriptArguments args = new ScriptArguments();
args.Add("ddId", _DropDownAddress.ClientID);
args.Add("labelId", _LabelAddress.ClientID);
args.Add("checkId", _RowSameAs.ChildClientID);
args.Add("sameAsId", (SameAs && _SameAsAddress != null) ? _SameAsAddress.LabelControl.ClientID : "-1");
args.Add("hidSelectId", _HiddenSelectedID.ClientID);
args.Add("hidSameAsId", _HiddenSameAs.ClientID);
Not only is the codebehind cleaner but I don’t have to worry about string.Format or the order in which I add arguments in. The resulting script generated is:
args.ToString() "{ ddId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__DropDownAddress', labelId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__LabelAddress', checkId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__RowSameAs_FormField_CheckBox', sameAsId : '-1', hidSelectId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__HiddenSelectedID', hidSameAsId : 'ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__HiddenSameAs' }"
This is a javascript Object with a property per key set to the corresponding value. So in script I only need to take in one argument, the argument object. I can then access every piece of information inserted in to ScriptArguments via the correct key:
function fnAddressIsReadOnly(args) {
    alert(args.ddId);
    alert(args.labelId);
}
Will alert me with “ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__DropDownAddress” and “ctl00__ContentMain__ControlOrderSoldTo__AddressSoldTo__LabelAddress”.
The great thing is how simple this was to implement:
public class ScriptArguments : Dictionary<string, string>
{
    public override string ToString()
    {
        StringBuilder script = new StringBuilder("{ ");
        this.Keys.ToList().ForEach(key => script.AppendFormat("{0} : '{1}', ", key, this[key]));
        script.Remove(script.Length - 2, 2);
        script.Append(" }");
        return script.ToString();
    }
}
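Hooking it up is then one line; a small hedged example using the drop-down from the earlier snippet (the single-argument function signature is assumed):

//ToString() emits the object literal straight into the function call
string changeScript = string.Format("fnAddressChange({0});", args);
_DropDownAddress.Attributes["onchange"] = changeScript;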
This simple class solves a simple problem. I hope you find it useful.
FindControl: Recursive DFS, BFS, and Leaf to Root Search with Pruning October 24, 2011
Posted by codinglifestyle in ASP.NET, C#, CodeProject, jQuery.Tags: ASP.NET, BFS, DFS, extension methods, FindControl, jQuery, pruning, tree
add a comment
I have a nefarious reason for posting this. It's a prerequisite for another post I want to do on control mapping within javascript, when you have one control which affects another and there's no good spaghetti-less way to hook them together. But first, I need to talk about my nifty FindControl extensions. Whether you turn these into extension methods or just place them in your page's base class, you may find them handy.
We’ve all used FindControl and realized it’s a pretty lazy function that only searches its direct children and not the full control hierarchy. Let’s step back and consider what we’re searching before jumping to the code. What is the control hierarchy? It is a tree data structure whose root node is Page. The most common recursive FindControl extension starts at Page or a given parent node and performs a depth-first traversal over all the child nodes.
Search order: a-b-d-h-e-i-j-c-f-k-g
/// <summary>
/// Recurse through the controls collection checking for the id
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>The control, if found, or null</returns>
public static Control FindControlEx(this Control control, string id)
{
    //Check if this is the control we're looking for
    if (control.ID == id)
        return control;

    //Recurse through the child controls
    Control c = null;
    for (int i = 0; i < control.Controls.Count && c == null; i++)
        c = FindControlEx((Control)control.Controls[i], id);

    return c;
}
You will find many examples of the above code on the net. This is the "good enough" algorithm of choice. If you have ever wondered about its efficiency, read on. Close your eyes and picture the complexity of the seemingly innocent form… how every table begets rows, begets cells, begets the controls within the cell, and so forth. Before long you realize there can be quite a complex control hierarchy, sometimes quite deep, even in a relatively simple page.
Now imagine a page with several top-level composite controls, some of them rendering deep control hierarchies (like tables). As the designer of the page you have inside knowledge about the layout and structure of the controls contained within. Therefore, you can pick the best method of searching that data structure. Look at the diagram above and imagine the b-branch was much more complex and deep. Now say what we're trying to find is g. With depth-first you would have to search the entirety of the b-branch before moving on to the c-branch and ultimately finding the control in g. For this scenario, a breadth-first search would make more sense as we won't waste time searching a complex and potentially deep branch when we know the control is close to our starting point, the root.
Search order: a-b-c-d-e-f-g-h-i-j-k
/// <summary>
/// Finds the control via a breadth first search.
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>If found, the control. Otherwise null</returns>
public static Control FindControlBFS(this Control control, string id)
{
    Queue<Control> queue = new Queue<Control>();

    //Enqueue the root control
    queue.Enqueue(control);

    while (queue.Count > 0)
    {
        //Dequeue the next control to test
        Control ctrl = queue.Dequeue();
        foreach (Control child in ctrl.Controls)
        {
            //Check if this is the control we're looking for
            if (child.ID == id)
                return child;

            //Place the child control in the queue
            queue.Enqueue(child);
        }
    }

    return null;
}
Recently I had a scenario where I needed to link 2 controls together that coexisted in the ItemTemplate of a repeater. The controls existed in separate composite controls.
In this example I need to get _TextBoxPerformAction's ClientID to enable/disable it via _CheckBoxEnable. Depending on the size of the data the repeater is bound to, there may be hundreds of instances of the repeater's ItemTemplate. How do I guarantee I get the right one? The above top-down FindControl algorithms would return the first match of _TextBoxPerformAction, not necessarily the right one. To solve this predicament, we need a bottom-up approach to find the control closest to us. By working our way up the control hierarchy we should be able to find the textbox within the same ItemTemplate instance, guaranteeing we have the right one. The problem is, as we work our way up we will be repeatedly searching an increasingly large branch we've already seen. We need to prune the child branch we've already seen so we don't search it over and over again as we work our way up.
To start we are in node 5 and need to get to node 1 to find our control. We recursively search node 5 which yields no results.
Next we look at node 5’s parent. We’ve already searched node 5, so we will prune it. Now recursively search node 4, which includes node 3, yielding no results.
Next we look at node 4’s parent. We have already searched node 4 and its children so we prune it.
Last we recursively search node 2, which includes node 1, yielding a result!
So here we can see that pruning saved us searching an entire branch repeatedly. And the best part is we only need to keep track of one id to prune.
/// <summary>
/// Finds the control from the leaf node to root node.
/// </summary>
/// <param name="ctrlSource">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <returns>If found, the control. Otherwise null</returns>
public static Control FindControlLeafToRoot(this Control ctrlSource, string id)
{
    Control ctrlParent = ctrlSource.Parent;
    Control ctrlTarget = null;
    string pruneId = null;

    while (ctrlParent != null && ctrlTarget == null)
    {
        ctrlTarget = FindControlEx(ctrlParent, id, pruneId);
        pruneId = ctrlParent.ClientID;
        ctrlParent = ctrlParent.Parent;
    }
    return ctrlTarget;
}

/// <summary>
/// Recurse through the controls collection checking for the id
/// </summary>
/// <param name="control">The control we're checking</param>
/// <param name="id">The id to find</param>
/// <param name="pruneClientID">The client ID to prune from the search.</param>
/// <returns>If found, the control. Otherwise null</returns>
public static Control FindControlEx(this Control control, string id, string pruneClientID)
{
    //Check if this is the control we're looking for
    if (control.ID == id)
        return control;

    //Recurse through the child controls
    Control c = null;
    for (int i = 0; i < control.Controls.Count && c == null; i++)
    {
        if (control.Controls[i].ClientID != pruneClientID)
            c = FindControlEx((Control)control.Controls[i], id, pruneClientID);
    }
    return c;
}
Now we have an efficient algorithm for searching leaf to root without wasting cycles searching the child branch we came from. All this puts me in mind of jQuery's powerful selection capabilities. I've never dreamed up a reason for it yet, but searching for a collection of controls would be easy to implement, and following jQuery's lead we could extend the above to search for far more than just an ID.
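For instance, a speculative sketch (not from this post) of a predicate-based search in the same extension-method style:

/// <summary>
/// Recursively collects every control satisfying the given predicate
/// </summary>
public static List<Control> FindControls(this Control control, Func<Control, bool> match)
{
    List<Control> results = new List<Control>();
    foreach (Control child in control.Controls)
    {
        if (match(child))
            results.Add(child);
        results.AddRange(child.FindControls(match));
    }
    return results;
}

//Example: collect every TextBox on the page
List<Control> textboxes = Page.FindControls(c => c is TextBox);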
Pass a Name Value Pair Collection to JavaScript August 8, 2011
Posted by codinglifestyle in ASP.NET, CodeProject, Javascript.Tags: ASP.NET, Javascript, jQuery
1 comment so far
In my crusade against in-line code I am endeavouring to clean up the script hell in my current project. My javascript is littered with these types of statements:
var hid = '<%=hidSelectedItems.ClientID%>';
var msg = '<%=GetResourceString("lblTooManyItems")%>';
Part of the cleanup is to minimize script on the page and instead use a separate .js file. This encourages me to write static functions which take in ids and resources as parameters, allows for easier script debugging, and removes all in-line code making maintenance or future refactoring easier.
While moving code to a proper .js file is nice there are times we might miss the in-line goodness. Never fear, we can build a JavaScript object containing properties for anything we might need with ease. This equates to passing a name/value pair collection to the JavaScript from the code behind. Take a look at this example:
ScriptOptions options = new ScriptOptions();
options.Add("ok", GetResourceString("btnOK"));
options.Add("oksave", GetResourceString("btnOkSave"));
options.Add("cancel", GetResourceString("btnCancel"));
options.Add("viewTitle", GetResourceString("lblAddressEditorView"));
options.Add("editTitle", GetResourceString("lblAddressEditorEdit"));
options.Add("createTitle", GetResourceString("lblAddressEditorCreate"));
options.RegisterOptionsScript(this, "_OptionsAddressEditorResources");
Here we’re using the ScriptOptions class to create an object called _OptionsAddressEditorResources we can access in our script. Now let’s see these options in use:
function fnAddressEditDialog(address, args) {
    //Define the buttons and events
    var buttonList = {};
    buttonList[_OptionsAddressEditorResources.ok] = function() { return fnAddressEditOnOk(jQuery(this), args); };
    buttonList[_OptionsAddressEditorResources.oksave] = function() { return fnAddressEditOnOkSave(jQuery(this), args); };
    buttonList[_OptionsAddressEditorResources.cancel] = function() { jQuery(this).dialog("close"); };

    //Display the dialog
    jQuery("#addressEditorDialog").dialog({
        title: _OptionsAddressEditorResources.editTitle,
        modal: true,
        width: 535,
        resizable: false,
        buttons: buttonList
    });
}
Above we see the jQuery dialog using the resources contained within the _OptionsAddressEditorResources object.
So this seems simple but pretty powerful. Below is the ScriptOptions class which simply extends a Dictionary and writes out the script creating a named global object. Good luck cleaning up your script hell. Hopefully this will help.
/// <summary>
/// Class for generating javascript option arrays
/// </summary>
public class ScriptOptions : Dictionary<string, string>
{
    /// <summary>
    /// Adds the control id to the options script
    /// </summary>
    /// <param name="control">The control.</param>
    public void AddControlId(WebControl control)
    {
        this.Add(control.ID, control.ClientID);
    }

    /// <summary>
    /// Registers all the key/values as an options script for access in the client.
    /// </summary>
    /// <param name="page">The page</param>
    /// <param name="optionsName">Name of the options object</param>
    public void RegisterOptionsScript(Page page, string optionsName)
    {
        if (!page.ClientScript.IsStartupScriptRegistered(page.GetType(), optionsName))
        {
            StringBuilder script = new StringBuilder(string.Format("var {0} = new Object();", optionsName));
            this.Keys.ToList().ForEach(key => script.Append(string.Format("{0}.{1}='{2}';", optionsName, key, this[key])));
            page.ClientScript.RegisterStartupScript(page.GetType(), optionsName, script.ToString(), true);
        }
    }
}
Yet Another VS2010 Overview June 18, 2010
Posted by codinglifestyle in ASP.NET, C#, Parallelism, Visual Studio 2010.add a comment
Today I attended a mediocre presentation by Paul Fallen, which still looked stellar compared to the atrocious overview put on at the Galway VS2010 Launch Event. Paul had the look of a man who had seen these slides many times and glossed over them at speed. In fairness, he was using the same presentation deck I've seen since TechEd 2008. I think we had all seen several flavours of this overview by now, so nobody seemed to mind. Below are the few snippets of information to add to the smorgasbord of other snippets I've gleaned from other talks of this nature.
Please see my earlier, more comprehensive posts on VS2010.
Here is the VS2010 Training Kit which was used in the demos.
- Common Language Runtime
- Latest version is CLR 4 (to go with .NET 4).
- Previous version of CLR 2 encompassed .NET 2, 3, 3.5, 3.5SP1
- Implications
- App pool .NET Framework version
- Incompatibilities running CLR frameworks side by side within same process
- Think 3 COM objects accessing Outlook all using CLR1, 2, and 4
- Managed Extensibility Framework (MEF)
- Library in .NET that enables greater reuse of applications and components
- VS2010 & C# 4.0
- IDE
- WPF editor – Ctrl + mouse wheel to zoom. Handy for presentations
- Box select (like command prompt selection)
- Breakpoint labelling, import/export, Intellitrace (covered below)
- Code navigation improvements (Ctrl + , and Ctrl + – for back)
- Call Hierarchy
- Allows you to visualize all calls to and from a selected method, property, or constructor
- Improved Intellisense
- Greatly improved javascript intellisense
- Support for camel case
- Can toggle (Ctrl + Space) between suggestive and consume first mode (handy for TDD)
- Test run record, fast forward
- Better multi-monitor support, docking enhancements
- Tracking requirements and tasks as work items
- Better control over ClientID
- Routing moved out from MVC to general ASP.NET
- Optional and named parameters
- Improved website publishing, ClickOnce (see prev. posts)
- Parallelism
- Pillars
- Task Parallel Library (TPL)
- He didn’t touch at all on the new task concept
- Parallel LINQ (PLINQ)
- These are the extension methods to LINQ to turn query operators into parallel operations.
var matches = from person in people.AsParallel()
              where person.FirstName == "Bob"
              select person;
- System.Threading significant updates
- Coordination Data Structures (CDS)
- Lightweight and scalable thread-safe data structures and synchronization primitives
- Toolset
- Debugger: record and visualize threads
- Visualizer: View multiple stacks
- IntelliTrace – new capability to record execution, play it backwards and forwards, even email it to another engineer and have them reproduce the problem on their box
- Other
- Eventual deprecation of ThreadPool as higher-level abstractions layer atop the toolkit?
- Unified cancellation using cancellation token
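As a quick hedged sketch of that unified model (the work loop here is invented for illustration):

//One token source can cancel tasks, PLINQ queries, and CDS waits alike
CancellationTokenSource cts = new CancellationTokenSource();

Task worker = Task.Factory.StartNew(() =>
{
    while (true)
    {
        //Cooperative cancellation: throws OperationCanceledException when requested
        cts.Token.ThrowIfCancellationRequested();
        //... do a unit of work ...
    }
}, cts.Token);

cts.Cancel(); //the task observes the token and transitions to Canceled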
- Dynamic Language Runtime (DLR)
- New feature in CLR 4
- Major new feature in C# 4 is dynamic type
- What Linq was to C# 3.5
- Adds shared dynamic type system, standard hosting model and support to make it easy to generate fast dynamic code
- Big boost working with COM: type equivalence, embedded interop, managed marshalling
- Windows Communication Framework (WCF)
- Service discovery
- Easier to discover endpoints
- Imagine an IM chat program or devices that come and go
- REST support via WCF WebHttp Services
- Available in the code gallery templates
CustomValidator and the ValidationSummary Control April 26, 2010
Posted by codinglifestyle in ASP.NET, jQuery, Uncategorized.Tags: jQuery, validators
3 comments
ASP.NET validators can be tricky at times. What they actually do isn’t particularly hard, but we have all had issues with them or quickly find their limits when they don’t meet our requirements. The CustomValidator control is very useful for validating outside the constraints of the pre-defined validators: required fields, regular expressions, and the like which all boil down to canned javascript validation. CustomValidators are brilliant as you can write your own client-side functions and work within the ASP.NET validation framework. They are also unique in that they allow for server-side validation via an event.
However, there is a common pitfall when used in combination with the ValidationSummary control. Normally I would avoid using the ShowMessageBox option as I believe pop-ups are evil. However, where I work this is the norm, and the problem is the CustomValidator's error isn't represented in the summary popup.
When the ASP.NET validators don't live up to our requirements we really must not be afraid to poke around Microsoft's validation javascript. It contains most of the answers to the questions you read about on the net (to do with ASP.NET validation… it isn't the new Bible/42). Quickly we identify the function responsible for showing the pop-up. ValidationSummaryOnSubmit sounds good but, as the name implies, it occurs only on submit. However, my validator failed after submit and I needed the popup to show what errors occurred. I could see from the script window that this function could be called, but programmatically registering the startup script wasn't working. So I used a jQuery trick to call the function after the DOM had loaded.
So, drumroll please, here is the information you want to copy and paste into your CustomValidator event:
if (!args.IsValid)
{
    ScriptManager.RegisterStartupScript(this, this.GetType(), "key",
        "$(function() { ValidationSummaryOnSubmit('MyOptionalValidationGroup') });", true);
}
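For context, that snippet lives inside the CustomValidator's ServerValidate event handler. Here is a minimal sketch, assuming a hypothetical AddressExists business rule:

protected void CustomValidatorAddress_ServerValidate(object source, ServerValidateEventArgs args)
{
    //Hypothetical server-side rule
    args.IsValid = AddressExists(args.Value);

    //If invalid, register the script above so the
    //ValidationSummary message box reappears after the postback
    if (!args.IsValid)
    {
        ScriptManager.RegisterStartupScript(this, this.GetType(), "key",
            "$(function() { ValidationSummaryOnSubmit('MyOptionalValidationGroup') });", true);
    }
}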
Now my server-side validation will bring up the ValidationSummary messagebox.
Unlock User or Reset Password via Database query – ASP.NET Membership February 13, 2010
Posted by codinglifestyle in ASP.NET, CodeProject.Tags: ASP.NET, membership, password, reset, roles, unlock, user
4 comments
This morning I was logging in to my website and couldn't log in. My personal site uses the out-of-the-box ASP.NET v2 membership and roles. It took a while to determine what was wrong because my own website didn't tell me much, using a blanket unsuccessful message for any problem. This led me to believe my password was wrong, or worse, that my site had been hacked and the password changed!
It turned out I had entered the wrong password too many times and locked myself out. However, my site wasn't programmed to tell me I was locked out (see here for improvement). I probably entered the right password loads of times, but couldn't tell because my account was locked. Once I figured this out, the easiest way to unlock the user was via the SQL query window as my site is deployed at an ISP. You can unlock programmatically, but I wasn't sure how to via the database directly. Luckily, a quick look through the sprocs revealed what I was looking for and the day was saved:
DECLARE @return_value int

EXEC @return_value = [dbo].[aspnet_Membership_UnlockUser]
    @ApplicationName = N'applicationName',
    @UserName = N'user'

SELECT 'Return Value' = @return_value
GO
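For completeness, the programmatic route mentioned above is straightforward with the Membership API (the user name is illustrative):

MembershipUser user = Membership.GetUser("user");
if (user != null && user.IsLockedOut)
    user.UnlockUser(); //returns true if the provider unlocked the user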
If you don’t know your application name, the query below can be handy. If you need to reset your password you can use the information obtained by this query along with the sproc below. First, create a new user or you can use an existing user with a known password. Next, execute the query below.
SELECT au.username, aa.ApplicationName, password, passwordformat, passwordsalt
FROM aspnet_membership am
INNER JOIN aspnet_users au
ON (au.userid = am.userid)
INNER JOIN aspnet_applications aa
ON (au.applicationId = aa.applicationid)
Now that you have a valid password, salt, and password type you can set that password information to the account which needs to be reset. So take the valid password, salt, and password format and put it in the sproc below along with the application name and user which needs to be reset.
--Prepare the change date
DECLARE @changeDate datetime
set @changeDate = getdate()

--Set the password
exec aspnet_Membership_setPassword
    'applicationName',
    'user',
    'password',
    'passwordsalt',
    @changeDate,
    passwordformat
Execute. Now both users have the same password. Good luck!
Ref: http://aquesthosting.headtreez.com/doc/b873561c-ab7a-4a8e-9934-cc9366af8a81, http://mitchelsellers.com/Blogs/tabid/54/EntryID/23/Default.aspx, http://msdn.microsoft.com/en-us/library/system.web.security.membershipuser.unlockuser.aspx