Thanks to Chris Terman and MIT OpenCourseWare. These notes are from an MIT lecture found here.
What we want in a memory.
Technologies for memories. (see table)
SRAM Memory Cell
1-T Dynamic RAM
Challenge to cope with Quality vs. Quantity
Key idea: Best of both worlds using Memory hierarchy
Memory reference patterns. Locality for program, stack and data
Exploiting the Memory Hierarchy
The Cache Idea: Program-Transparent Memory Hierarchy
How high of a Hit Ratio do we need?
The Cache Principle
Direct Mapped Cache
Contention Problem: Contention, Death and Taxes
The professor walks through the low-level details of memory: addresses, DIN/DOUT signals.
Two kinds of memories:
2-port main memory: one port is read via the program counter to fetch an instruction; the other port is used by load and store instructions, which compute a memory address with an offset to access data.
Register file: built into the CPU datapath; it supplies two register operands for each instruction. Same organization as the 2-port memory.
Technologies for memories (typical capacities):
Register file: 100’s of bits
SRAM: 100’s of Kbytes
DRAM: 1000’s of Mbytes
Hard disk: 100’s of Gbytes
The real bottleneck: if we have to fetch each instruction from main memory, every access incurs a large latency even though the processor itself is very fast.
Historically, processor speed has improved with CMOS technology, and DRAM capacity has increased as transistors get smaller and smaller, but DRAM latency — dictated by the size of the memory — has not improved nearly as fast as processor speed.
Static RAM (SRAM) – the technology used in our register file (one of the types of memory mentioned above). The professor discusses the low-level gates and transistors of SRAM, which is built from cross-coupled inverters: a static, bi-stable storage element. Writes work by “overpowering” the stored value.
We can build multi-port SRAMs: one can increase the number of SRAM ports by adding access transistors. By carefully sizing the inverter pair so that one is strong and the other is weak, we ensure that the WRITE bus only has to fight the weaker inverter, while READs are driven by the stronger one – minimizing both access and write times.
1-T Dynamic RAM
DRAM is a high-capacity memory technology and is much simpler per cell. SRAM’s six transistors per cell may not sound like much, but they add up quickly. What is the fewest number of transistors that can store a bit? One transistor plus a capacitor. The achievable capacity is determined by cell area, a better dielectric, a thinner film — there is a formula to calculate it.
The interesting idea here is refresh: every ~10 ms your computer reads all the data in the memory and writes it back again so that the stored charge does not get lost.
A trick to increase throughput is pipelining: send the address over in a couple of chunks (the row address, then the column address).
Synchronous DRAM (SDRAM)
Double Data Rate Synchronous Memory (DDR)
The idea of DDR RAM is that data is transferred on both edges of the clock. The machine feels slow because fetching data from this memory system is slow.
Average latency = 4ms
Average seek time = 9ms
Transfer rate = 20Mbytes/sec
Capacity = 1TB
Cost <= $1/Gbytes
Spindle speed: 7000 – 15000 RPM
A drive is a stack of platters on a spindle; the matching tracks across the platters form cylinders. Each platter surface has tracks, which are divided into sectors. The arm carrying the read/write head is a mechanical device. Information is stored in concentric circles to minimize head movement.
Quantity vs Quality
Your memory can be BIG and slow …. or …
SMALL and FAST.
Is there an architectural solution to this DILEMMA?
We can nearly get our wish.
KEY: Use a hierarchy of memory technologies
Keep the most often-used data in a small, fast SRAM (often local to CPU chip)
Refer to Main Memory only rarely, for remaining data.
The reason this strategy works: LOCALITY
Statistically, researchers have observed characteristic memory reference patterns. See diagram (21:03).
Program: the branching factor also affects the access pattern — usually if-else statements that branch program paths out.
Stack: At any given moment we are using a small amount of the stack in a program – called the activation records for the current subroutine.
Data: Copying data from one data structure to another or performing computation on it.
Exploiting the Memory Hierarchy
Approach1: (Cray, others): Expose Hierarchy
Hardware types call this SMOP — “a simple matter of programming”: when the hardware folks get lazy, they push the programmer to write smarter programs. Until recently Cray supercomputers were the fastest machines on earth. Seymour Cray’s argument for this approach: you cannot fake something that you do not have — namely, a huge, fast memory.
Register, Main Memory
Disk each available as storage alternatives
Tell programmers: “Use them cleverly”
Approach2: Hide Hierarchy
Here the idea is that the hardware looks over the program’s shoulder and manages locality of reference — a layer of abstraction that does the memory management.
Programming model: SINGLE kind of memory, single address space
Machine AUTOMATICALLY assigns location to fast or slow memory depending on usage patterns.
The CPU looks first at a small static cache (usually L1/L2), then at DRAM, then at the hard disk. Most of what you buy in a processor is cache memory, so the size of the cache is important. Ideally you want most references to be satisfied by the yellow box (the small static cache in the diagram).
The Cache Idea: Program-Transparent Memory Hierarchy
Cache contains “temporary copies” of selected main memory locations.
Challenge is to make hit ratio as high as possible.
Suppose we can easily build an on-chip static memory with a 4ns access time, but the fastest DRAM that we can buy for main memory has an average access time of 40ns. How high of a hit rate do we need to sustain an average speed of 5ns? (Only slightly slower than cache?)
Over 97% of the time, the instruction must already be in the small cache. Over any period of time there is a subset of instructions the processor needs for its computation; if the cache is big enough to hold that working set, we can achieve this hit ratio. The time the CPU spends computing should be balanced against the time spent servicing misses.
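Working the numbers (a quick sketch; the 4 ns / 40 ns / 5 ns figures come from the question above): the average access time is t_avg = h·t_cache + (1 − h)·t_dram, and solving for the hit rate h gives:

```typescript
// Solve t_target = h * t_cache + (1 - h) * t_dram for the hit rate h.
function requiredHitRate(tCache: number, tDram: number, tTarget: number): number {
  return (tDram - tTarget) / (tDram - tCache);
}

const h = requiredHitRate(4, 40, 5); // (40 - 5) / (40 - 4) = 35/36
console.log((h * 100).toFixed(1) + "%"); // ≈ 97.2%
```

So the cache must satisfy roughly 35 out of every 36 references — over 97%, as noted above.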
The Cache Principle
ALGORITHM: Look nearby for the requested information first, if it’s not there, check secondary storage.
Basic cache algorithm:
The cache knows two things: which addresses it holds and their contents. On a hit, the CPU can update the data in the cache, and it is then the cache’s responsibility to propagate the update to main memory. On a miss, the cache has to evict something and replace it with the requested data from main memory.
Associativity: Parallel Lookup
Look at every row (line) of the cache in parallel to see if it holds what the CPU is looking for. Any data item can be located in any cache location. Fully-associative caches are very expensive — roughly half the cell area goes to storing the address (tag).
A cheaper alternative to the associative cache. It is non-associative — it indexes into the data with a table lookup instead of a parallel search. The basic idea is to use a table index to find the memory location quickly, because doing the same lookup in parallel is expensive.
Problem: contention and cache conflicts. Improve the mapping (indexing) function by using low-order rather than high-order address bits, since the high-order bits do not change much given locality of reference.
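A tiny sketch of the idea (the line count and addresses are illustrative, not from the lecture): the low-order address bits pick the line, the high-order bits become the tag, and two addresses that share low-order bits contend for the same line:

```typescript
// Sketch of a direct-mapped cache lookup: low-order bits index the line,
// high-order bits are stored as the tag.
const LINES = 8; // must be a power of two

interface Line { valid: boolean; tag: number; data: number; }
const cache: Line[] = Array.from({ length: LINES }, () => ({ valid: false, tag: 0, data: 0 }));

function lookup(addr: number, mem: number[]): { hit: boolean; data: number } {
  const index = addr % LINES;            // low-order bits → line index
  const tag = Math.floor(addr / LINES);  // high-order bits → tag
  const line = cache[index];
  if (line.valid && line.tag === tag) return { hit: true, data: line.data };
  // Miss: fetch from main memory and evict whatever occupied this line (contention!).
  const data = mem[addr];
  cache[index] = { valid: true, tag, data };
  return { hit: false, data };
}
```

Note how address 11 would evict address 3 (11 mod 8 = 3) — exactly the contention problem described above.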
L1 caches are very small but very fast: a few thousand entries long, responding in a fraction of a nanosecond.
Next lecture deals with the cache issues, and if there is a happy middle ground.
Both MVP and MVVM are derivatives of MVC (see timelines and how these have evolved). The key difference between them is the dependency each layer has on other layers as well as how tightly bound they are to each other. See diagram and the references column for more details.
These patterns mainly try to address the problems of structuring code related to: 1. application state, 2. business logic, and 3. state and view synchronization.
MVP sits somewhere between MVC and MVVM; it is closely related to the Presentation Model pattern.
Explanation and flow
A user input, like a click of a link or a URL, is first intercepted by the controller.
A controller can output different views, based on authorization, error validation, success or custom logic, etc. See many-to-one relationship. Also note one-way communication from controller to the view.
The controller passes the model to the view, and the view binds itself using a templating engine (Razor in the case of ASP.NET MVC).
Model is usually a data-object POCO (Plain old CLR Object) with minimal to no methods (behavior).
A user input begins with the view and not presenter. View invokes commands on the presenter, and presenter in-turn modifies the View.
View and Model never communicate or know of (refer) each other.
Presenter is a layer of abstraction of the View.
There is always a one-to-one mapping between a presenter and the view.
Presentation Model and View talk to each other. View grabs properties and calls methods on the PM. PM exposes properties and methods for View and dispatches events, which the View may listen to.
The PM talks to the Model in the domain layer, either through a reference it contains or indirectly through messages.
A user input begins with the view and may end up in executing a ViewModel behavior.
View and Model never communicate or know of (refer) each other.
ViewModel is a strongly-typed model for the view that is an exact reflection (metaphorically speaking) or abstraction of the view.
ViewModel and View are always synced.
The Model has no idea that the View and ViewModel exist, and the ViewModel has no idea that a View exists, which promotes decoupling that pays dividends.
In C#, a “reference” here means that one class uses (holds a reference to) the other.
View refers to the model, but not vice-versa.
The controller refers the model, populates it and passes it to the View.
View is oblivious of the controller, but refers and expects a particular type of Model.
The Presenter needs a reference to the View.
The View also has a reference to the Presenter, which responds to user events.
Presenter has a reference to the view and it populates the View, as opposed to View binding to the Model for every interaction.
To decouple, there usually is an abstract class or an interface that View and PM share.
Unlike the Presenter, a ViewModel does not need a reference to a view. View binds properties on a ViewModel.
The View has no idea that the model class exists.
The ViewModel and Model are unaware of the View.
Model is completely oblivious to the fact that ViewModel and View exists.
Views are often defined declaratively, often using a tool or a designer (think HTML or XAML).
Views are responsible for generating the markup, typically using a templating engine or a declarative language (HTML). A view may contain conditional code based on a Model property.
Either a different View is used for Edit and Read mode, or same view with conditional logic is used based on model property.
The View exposes an interface that can be used by the Presenter.
The View implements this interface and provides the methods it defines.
The Presenter, in turn, talks to the View only through this interface.
The view is declarative and contains the data-binding code that refers to the ViewModel.
There is a two-way bind and view is always synced with the ViewModel.
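A minimal sketch of what such a two-way-sync binding can look like under the hood (a hypothetical knockout.js-style observable; the API names here are made up for illustration):

```typescript
// A ViewModel property the View can bind to: reading, writing, and change notification.
type Subscriber<T> = (value: T) => void;

function observable<T>(initial: T) {
  let value = initial;
  const subs: Subscriber<T>[] = [];
  const get = () => value;
  const set = (v: T) => {
    value = v;
    subs.forEach((s) => s(v)); // notify every bound view so it stays in sync
  };
  const subscribe = (s: Subscriber<T>) => subs.push(s);
  return { get, set, subscribe };
}
```

Usage: the View subscribes (e.g. to update a DOM element) and writes back on user input; the ViewModel never needs a reference to the View.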
Examples you may use in views:
Formatting a display field (date string)
Showing only certain details depending on state. (only show edit if admin)
Managing view animations. (on hover, do something)
Controller or Presenter or ViewModel
A controller (or area) is reached through a routing engine — a set of rules based on the input URL, or the API path in the case of AJAX requests.
Controller decides which view has to be displayed, based on user input or current state of the user interaction with the application.
The View sends the input through a URL, which is intercepted by the routing engine and routed to the appropriate controller.
Controller modifies and populates the Model and hands it over to the View.
There is typically an action method in a Controller for each user interaction and its variants.
The code-behind (aspx.cs) in ASP.NET represents the presenter — loosely speaking. The interface in this case would be the Page class that every aspx.cs file inherits.
In the case of composition a Presentation Model may contain one or many child Presentation Model instances, but each child control will also have only one Presentation Model.
ViewModel does not need a reference to the View, which promotes loose-coupling and reuse of the same ViewModel for different views. Imagine, same viewModel used for website, mobile application and tablet application.
A ViewModel encapsulates the current state of the view as displayed on the screen, as well as the various commands or behaviors triggered by events.
A ViewModel may act as an adapter which transforms the raw model data into something that is in the format to be displayed to the user.
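A sketch of that adapter role (the OrderModel/OrderViewModel types and status values here are hypothetical, purely for illustration):

```typescript
// Raw, service-shaped model: enum codes and wire-format dates.
interface OrderModel { createdUtc: string; status: number; }
// View-shaped ViewModel: everything already formatted for display.
interface OrderViewModel { createdDisplay: string; statusDisplay: string; }

const STATUS_NAMES = ["Pending", "Shipped", "Delivered"]; // enum → display string

function toViewModel(m: OrderModel): OrderViewModel {
  return {
    createdDisplay: new Date(m.createdUtc).toDateString(), // date formatted for the user
    statusDisplay: STATUS_NAMES[m.status] ?? "Unknown",
  };
}
```

The view then binds to `statusDisplay` and `createdDisplay` directly, with no formatting logic of its own.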
Why do we need ViewModels
• Incorporating dropdown lists of lookup data into a related entity
• Master-detail records view
• Pagination: combining actual data and paging information
• Components like a shopping cart or user profile widget
• Dashboards, with multiple sources of disparate data
• Reports, often with aggregate data
The passive view implementation, in which the view contains no logic: it is just a container for UI controls that are directly manipulated by the presenter.
The supervising controller implementation, in which the view may be responsible for some elements of presentation logic, such as data binding, and has been given a reference to a data source from the domain models.
(This is closer to MVVM)
Models are often received from a service or through a dependency-injection interface, in a format that caters to a larger consumer base rather than mapping to our UI needs.
Model objects that you receive from the underlying services are raw, in whatever format caters to the different consumers of the service.
Not the entire model may be used by the view, but just the smaller subset of it.
Typically there is a need to collect different models from services into a single model.
Typically a domain-layer object that contains domain models, commands, and a subscription service.
Model is typically a server class transformed into a JSON or XML sent over the wire, or for server-side it may be a pre-defined domain class that is more general than what the view requires.
In case of undoable operations a ViewModel can refer to the model to restore the original state.
Perfect for web/HTTP, and accommodating of its stateless nature and addressability.
Disconnected stateless applications.
REST based thin clients as routing is inherent to this pattern.
Mobile applications implemented using HTML5.
Classic Webforms ASP.NET
SmartUI or Rapid App Development.
Windows Forms or WPF
Migrating from legacy code, where UI logic is already wired up.
Heavy intranet work-flow based applications.
State heavy web applications or views.
Silverlight or Rich Internet Apps.
Windows phone or Android.
Highly event driven and stateful UI.
UI where a user interacts with app for a long time before saving the state.
Works well where connection between the view and rest of the program is not always available.
Patterns and Practices
Think of Spring framework.
Controller is like the Strategy design pattern.
Think ASP.NET aspx pages with a complete Page lifecycle. (Init-Load-Validation-Event-Render-Unload)
Presenter acts as a mediator.
Observer or Publish/Subscribe (INotifyPropertyChanged, IObserver)
ViewModel exposes an Observable.
Backbone.js, knockback.js, Spine.js, angular.js.
WPF (Desktop) or Silverlight
Windows Phone apps (XAML)
Routing is inherent to this pattern and Controller acts as a mediator of presentation (View) and data (Model).
Routing gives greater control over the application structure and makes it manageable.
The abstractions are properly separated, which enables more control over each layer, especially the view which now is clearly separated from the state.
Separation helps with testability.
The goal of MVP is to separate the state and behavior out of the View, which makes it easier for legacy spaghetti applications to migrate to MVP as a first step.
Since the Presenter is always written against an interface, it provides a GUI-agnostic, testable surface.
Imposes a consistent interface pattern that developers can follow.
Attempts to clearly separate the declarative UI from the business logic.
Promotes parallel development, where UI developers write the bindings while the model and ViewModel are owned by application developers.
Clearly separates the view logic and makes the view dumber, with the least amount of logic.
In practice a website, mobile application and tablet application all need different views, but can share the same viewModel.
ViewModel is easier to unit test than event driven code, and leaves the issues of UI automation testing out of the way.
ViewModel can be re-used for different representations as it is highly decoupled from the View.
If the model data comes from the backend, it typically needs some transformation — as simple as converting an enum to a string, or as complex as calculating a number of days from date properties of the model. Slowly the view starts holding more and more logic.
Mechanisms like ViewBag/ViewData exist which are abused to substitute the actual need for model, when model size is not large.
In practice the Model from the back-end repository is not usable as-is due to different property names or data-structure formats. A new abstraction of the Model is created, and it is often a pain to map this new ViewModel to the Model and manage changes.
The design pattern seems to work against the constraints of the HTTP web, as it demands heavier bandwidth which is not free or unlimited.
The view is still tightly coupled compared to MVC.
Debugging the events fired from the UI is harder due to its intermingling with the View.
It is hard to stick to one of the variants of MVP in all cases, resulting in a mixed code-base.
Cannot always be done in parallel as the interfaces need to be defined and agreed upon first.
For simpler applications it is overkill.
As opposed to MVC, the declarative bindings in MVVM make it harder to debug.
Data-binding on simpler controls can be more code than the data itself.
Data-binding implementations keep a lot of in-memory book-keeping.
Whether View development drives the ViewModel or vice versa is unclear, which makes communication harder.
Sometimes criticized because markup and JS code (the data-bindings) are intermixed. Unmanaged data-binding can consume considerable memory.
John Gossman points out that generalizing Views for a larger application becomes more difficult.
The ViewModel is a class that is not a POCO or POJO, but it’s still worth the effort.
Designed by Trygve Reenskaug in 1979 during his time working on Smalltalk-80 at Xerox PARC. The definition has evolved heavily over the following years.
Proposed by Mike Potel of Taligent, Inc. (a subsidiary of IBM) in 1996.
Defined by John Gossman at Microsoft in 2005 for use with Windows Presentation Foundation.
1. URLs outlive the technology and the underlying stack. (So .aspx, .cgi, or .php is not good enough. Leave those details to the MIME type in the HTTP header — that’s where they really matter.)
2. URLs are used by search engines; file extensions like .aspx or .php add no value to search. They are not keywords.
3. People bookmark pages and pass the bookmarks around (URLs do end up on billboards — and pamphlets). Don’t use your bit.ly card now; those URLs make no sense to me.
4. Use safe characters like underscore on resources (_) to make URLs more readable.
5. URL doesn’t restrict someone to use a particular language or technology. (It can be specified in the content-negotiation of the http header)
- Content negotiation does not have to be just on representation format, but can also be on the language.
The browser looks at its cache and says: hey, I think I have this resource, but I don’t know if it’s the latest and most up-to-date.
The browser then appends a header called If-Modified-Since.
Example of a 304 response — it says the resource was not modified, so the server doesn’t resend the entire representation.
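A sketch of the server-side decision (the function name is hypothetical, and real servers also handle ETag/If-None-Match): compare the client’s If-Modified-Since date against the resource’s last-modified time and answer 304 when nothing has changed:

```typescript
// Should the server reply 304 Not Modified instead of resending the body?
function shouldSend304(ifModifiedSince: string | undefined, lastModified: Date): boolean {
  if (!ifModifiedSince) return false;           // no conditional header: send the body
  const since = new Date(ifModifiedSince);      // HTTP-date, e.g. "Wed, 21 Oct 2015 07:28:00 GMT"
  if (isNaN(since.getTime())) return false;     // unparseable header: ignore it
  // 304: the resource has not changed since the client's cached copy.
  return lastModified.getTime() <= since.getTime();
}
```

On `true`, the server sends only status 304 and headers — saving the whole representation’s worth of bandwidth.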
- Accept-Encoding is the header through which a client advertises what sort of encoding it can understand and interpret (e.g. gzip). Compression details are abstracted away by HTTP, but servers can be smart about making it possible.
- Persistent (saves connection overhead) versus parallel (improves speed) connections — the two need to be balanced.
Public caches — run by internet service providers, or for a company or university.
Private caches — for a single user, e.g. the browser’s own cache (type about:cache in Chrome to inspect it).
Rules of maintaining up-to-date cache is a bit complex.
Always cache safe requests — in practice, GET requests.
The server (which is the source) can influence cache settings using headers: Cache-Control, Expires, or Pragma, with values such as public, private, and no-cache.
I remember I was asked in an interview — a very general and common question.
How does an ajax request work?
I also knew the latest version 2 and the difference between ActiveXObject or XDomainRequest for IE and XmlHttpRequest for rest of the world, and some history as it relates to the browser wars of the ages and the maverick standards implementations. Although knowing these sound impressive, I was still not able to get into another level of detail. The lower in abstraction you dig, the more esoteric the knowledge gets, and is a good candidate topic to discuss and evaluate how much you really know your stuff.
Publish/Subscribe is a common design pattern, seen and used in more places than you can imagine. While mainly used for event-driven designs, it is a level up from the Observer pattern in terms of decoupling systems. What is unique about publish/subscribe is that a message token — sometimes called a topic — is involved. The only thing a publisher and a subscriber have in common is the token: the publisher holds no reference to the subscriber, and vice-versa.
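A minimal sketch of the pattern (the Broker class here is hypothetical): publisher and subscriber meet only at the topic token:

```typescript
// Publisher and subscriber never reference each other; the broker fans out by token.
type Callback = (payload: unknown) => void;

class Broker {
  private topics = new Map<string, Callback[]>();

  subscribe(topic: string, cb: Callback): void {
    const list = this.topics.get(topic) ?? [];
    list.push(cb);
    this.topics.set(topic, list);
  }

  publish(topic: string, payload: unknown): void {
    // Only subscribers of this exact token are notified.
    for (const cb of this.topics.get(topic) ?? []) cb(payload);
  }
}
```

Usage: `broker.subscribe("user:login", handler)` on one side, `broker.publish("user:login", data)` on the other — neither side imports the other.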
Node.js is an event-loop-based server: an infinite loop listens for messages in a queue and executes them in order of arrival — in this case, HTTP requests.
Event Loops and Message Queues
Have you ever written a Windows console application or anything similar? If yes, then you know that once Main() finishes, the program ends. What if you want to extend this console application to accept requests ANYTIME, like a web server does? It should accept a request at any time and process it. Or remember how a console can wait until user input is entered using ReadLine? One way to do this is to write an infinite loop with an exit condition: for example, if I pass the token “end” as a string, the loop stops; any other token gets processed. Now let’s say there are 1000 requests — how do you handle them? The best way is to queue each request, and the loop processes them one by one.
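The console-application scenario above can be sketched as a toy event loop (the names are made up; a real server would block waiting for new messages rather than drain a pre-filled queue):

```typescript
// A toy event loop: queue messages, process them in arrival order, stop on "end".
type Handler = (msg: string) => void;

class ToyEventLoop {
  private queue: string[] = [];
  private handlers: Handler[] = [];

  post(msg: string): void { this.queue.push(msg); }      // enqueue a "request"
  onMessage(h: Handler): void { this.handlers.push(h); } // attach a subscriber

  run(): void {
    // The infinite loop with an exit condition described above.
    while (this.queue.length > 0) {
      const msg = this.queue.shift()!;
      if (msg === "end") break;                 // exit token
      for (const h of this.handlers) h(msg);    // dispatch in arrival order
    }
  }
}
```

Every message is handled one at a time, in order — which is exactly why one slow handler stalls everything behind it.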
The event tokens or event messages are the same tokens used in the publish/subscribe pattern: the publisher is the event loop that executes, and the subscribers are the handlers attached to a token — for example, a button click.
Browsers allow up to 6–8 concurrent HTTP requests per window per domain (this used to be 2 before 2008). Depending on the browser implementation, each request is made by a different thread, and these threads queue the responses back onto the main event loop.
How does this question apply to asynchrony in general? An asynchronous call may launch another thread to do the work, or it might post a message into a queue on another, already running thread. The caller continues and the callee calls back once it processes the message.
For example, let’s say you want to do auto-complete on keypress. The only way to hook into the user typing in the text box is key-down, key-up, or keypress. First, you don’t want your client function called every fraction of a millisecond — even if you let it run, the IO will probably take longer. Moreover, you don’t want to barrage your web server with a request for every single letter typed. You want the user to complete the word, or at least wait maybe 500 ms, before firing an autocomplete event.
Let’s say you do not debounce, and you attach a keypress callback that takes a long time. The browser will dutifully run your function, but the user will feel that the website is stuck until your long-running job finishes.
Scrolling is another example, where you would want the user to complete the scroll before you re-position your elements, or call do something similar.
There are two ways to deal with same event triggered multiple times within a short time span.
1. You delay the function call by x milliseconds, resetting the delay every time a new call comes in (DEBOUNCE).
2. You delay the function for an x-millisecond window (no matter how many times it’s called), and once that’s up, you trigger the function call (THROTTLE).
There is a very good elevator analogy provided that clearly distinguishes the difference.
Debounce: Delay the elevator every time a person shows up.
Throttle: Timed limit of 10 minutes on a subway ride. Doors close no matter what (ignore the complexity of sensors delaying).
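Minimal sketches of both (helper names are assumed; production implementations also handle leading/trailing edges and cancellation):

```typescript
// DEBOUNCE: every new call resets the timer; fn runs only after a quiet period.
function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // a new call delays the elevator
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// THROTTLE: at most one call per window, no matter how many times it's invoked.
function throttle<T extends unknown[]>(fn: (...args: T) => void, windowMs: number) {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last >= windowMs) { // the doors close on a fixed schedule
      last = now;
      fn(...args);
    }
  };
}
```

For the autocomplete example above, debounce with ~500 ms is the usual choice; throttle fits scroll handlers that must fire periodically during a long scroll.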
On a very high level — there are certain characteristics of tree problems that pop up very often.
They pop up in real-world scenarios. For example, a restricted BFS gives you LinkedIn-style degrees of connection. Doing a full BFS for each connection is a bit pricey, but doing it to the 2nd or 3rd degree is not that bad.
Now that I’ve emphasized enough why you should know these characteristics — here are some general guidelines.
      A
     / \
    B   C
   / \ / \
  D  E F  G
BFS (uses Queues) and results in level order (A | B C | D E F G)
DFS (uses recursion stack) and results in (A | B D E | C F G)
Pre-order (type of DFS) and results in (A | B D E | C F G)
In-order (type of DFS) and results in (D B E | A | F C G)
Post-order (type of DFS) and results in (D E B | F G C | A)
Post-order and pre-order traversals generate arithmetic expression sequences (postfix and prefix notation) that are unambiguous to a computer.
For a very large tree DFS will eat up the recursive stack space, so a BFS may be useful.
BFS is also memory-intensive in that it uses a queue, although it walks the nearest neighbors first.
In-order traversal is useful for BST’s, and human readable arithmetic sequence.
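The orders listed above can be checked with a short sketch over the example tree A(B(D,E), C(F,G)):

```typescript
interface TreeNode { val: string; left?: TreeNode; right?: TreeNode; }

const leaf = (val: string): TreeNode => ({ val });
// The tree drawn above: A is the root, B and C its children.
const tree: TreeNode = {
  val: "A",
  left:  { val: "B", left: leaf("D"), right: leaf("E") },
  right: { val: "C", left: leaf("F"), right: leaf("G") },
};

function bfs(root: TreeNode): string[] {         // level order, uses a queue
  const out: string[] = [], q: TreeNode[] = [root];
  while (q.length) {
    const n = q.shift()!;
    out.push(n.val);
    if (n.left) q.push(n.left);
    if (n.right) q.push(n.right);
  }
  return out;
}

function preorder(n: TreeNode | undefined, out: string[] = []): string[] {
  if (n) { out.push(n.val); preorder(n.left, out); preorder(n.right, out); }
  return out;
}

function inorder(n: TreeNode | undefined, out: string[] = []): string[] {
  if (n) { inorder(n.left, out); out.push(n.val); inorder(n.right, out); }
  return out;
}

function postorder(n: TreeNode | undefined, out: string[] = []): string[] {
  if (n) { postorder(n.left, out); postorder(n.right, out); out.push(n.val); }
  return out;
}
```

Running these reproduces exactly the sequences in the notes: BFS A B C D E F G, pre-order A B D E C F G, in-order D B E A F C G, post-order D E B F G C A.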
Class of Problems:
DFS traversal with height.
Get the height — the maximum (or minimum) depth — of the tree.
Is the tree balanced?
Is the tree symmetric?
Calculate the diameter of the tree
Is T1 a subtree of T2 (without loss of generality)?
Is one binary tree the mirror of another?
Print the cover of a binary tree?
Print the right view of the binary tree?
DFS Order Traversal
In-order (recursive and iterative). There is a way to do in-order traversal without a system or application stack — called Morris traversal.
Pre-order traversal (recursive and iterative) using one stack.
Post-order traversal (recursive and iterative) using two-stacks.
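As a sketch, the iterative in-order version with one explicit stack (no recursion) looks like this:

```typescript
interface TNode { val: number; left?: TNode; right?: TNode; }

// Iterative in-order: push the whole left spine, pop, visit, then go right.
function inorderIterative(root: TNode | undefined): number[] {
  const out: number[] = [];
  const stack: TNode[] = [];
  let cur = root;
  while (cur || stack.length) {
    while (cur) { stack.push(cur); cur = cur.left; } // descend as far left as possible
    const n = stack.pop()!;
    out.push(n.val);     // visit after the left subtree is done
    cur = n.right;       // then traverse the right subtree
  }
  return out;
}
```

On a BST this yields the keys in sorted order, which is the main practical use of in-order traversal.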
Path sum (DFS)
All paths that sum up to a value
Maximum path sum in the entire tree.
** There is only one path from any node to any other node in a tree — this is what makes it a tree and not a general graph. It also means there is exactly one path from the root to any other node.
BFS traversal — level order
Print a binary tree by level order.
Convert a binary tree into a Linked List by level order.
Are two nodes cousins in a binary tree. (Different parents, same level)
Reverse nodes in alternate levels of a binary tree.
** Understand that the level-order sequence can also be achieved using DFS, although it is more compute-intensive, even though it may use less space. ** Understand that at any point in time, the number of elements in the BFS queue is bounded by the widest level — in the worst case, all the leaves. ** Know at what point the level changes while traversing.
Serialize and de-serialize
Serialize a binary tree using a sentinel, then deserialize it.
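A sketch of sentinel-based serialization (pre-order with "#" marking a missing child — one common format choice, not the only one):

```typescript
interface BNode { val: number; left?: BNode; right?: BNode; }

// Pre-order serialization; "#" is the sentinel for a null child.
function serialize(n: BNode | undefined): string {
  if (!n) return "#";
  return `${n.val},${serialize(n.left)},${serialize(n.right)}`;
}

// Rebuild by consuming tokens in the same pre-order: value, left subtree, right subtree.
function deserialize(s: string): BNode | undefined {
  const tokens = s.split(",");
  let i = 0;
  const build = (): BNode | undefined => {
    const t = tokens[i++];
    if (t === "#") return undefined;
    return { val: Number(t), left: build(), right: build() };
  };
  return build();
}
```

The sentinel is what makes a single traversal sufficient — without it you would need two traversals (e.g. in-order plus pre-order, as in the next problem) to reconstruct the tree.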
Reconstruct a tree given Inorder and Preorder traversal.
Least common ancestor of two nodes.
Least common ancestor of two nodes given a parent pointer.
When a noob starts coding, the fight is to actually build solutions and come up with working code. Even a moderate coding task seems to consume most of the mind’s processing power in syntax, modeling, and implementing the algorithm bug-free. There is little time and brain energy left for code cleanliness, refactoring, and making it extensible the first time around.
Code cleanliness comes with practice and years of experience. It becomes second nature to start coding your ideas: you already have an approach in mind, and you start typing without much scribbling.
In the real world — in teams with tight budgets, or at startups once they start growing — the code gets rampant pretty quickly. With a culture of high turnover and attrition in teams, these issues proliferate. Iteration after iteration of bug fixes under tight deadlines, while spitting out new features, can degrade code quality very fast. In my experience, I have written code from the ground up (the first line of code) and also improved and extended 8-year-old code. In green-field coding, as our features grew in number and the need to squeeze efficiency out of the system increased, I had to refactor and refactor, and refactor what I had just refactored, over and over again. When you want to sail across a pond, a kayak is enough; as you grow out of the pond, only then do you need a boat. If you’re Facebook, Twitter, or Google, you then need to build a ship. Yet no project at these companies started by building a ship from the first line of code. Only incremental refactoring and the right balance of code quality can get you further. It is like maintaining your car: if you want it to run faster and longer, keep getting it serviced, or else the carburetor will clog and the tires will wear (even though you’re still getting mileage from it).
The degradation of code eats into developer efficiency. The time to add a new feature to a degraded codebase is high, and the chances of shipping the feature bug-free are low. When developer efficiency goes down, you find developers sitting late in the office and working weekends, unable to express their concerns to program managers or leads. Every now and then a developer needs to stand up and say: that’s it, this needs attention.
Incremental refactoring is one way to start — charity begins at home — but it comes with its own risks, tied to team dynamics and team priorities. No customer will applaud you for refactoring some code. Maybe if there was an efficiency gain — but does the product really need it at this point in time? Maybe you will be praised when you leave the team, or no one will ever know; not that it should matter. And suppose you introduce a bug through refactoring — then another can of worms opens. It is always better to communicate the need for refactoring, and to explain why your estimates are larger than they might otherwise be.
I will share some common scenarios I run into when refactoring client-side as well as server-side code. Time and again — whether I am put into a new team, or as time passes within my current team — things start to degrade.
Following are the issues:
- Make the module more resilient (to deal with other cases)
- Make it extensible (a one-off or two-off is not enough)
- Generalize the code to accommodate more scenarios
- Make it scale (by making it async or parallelizing it)
- The module is using a pattern the wrong way
- It’s spaghetti — with cross dependencies
I constantly find myself reducing rampantly growing CSS files into different sections — and now different files, using LESS. With one UI bug fix after another, the code quality degrades very badly; it has to stop at some point. One of my biggest pet peeves is finding two styles with the same selector in the same file: you fix the bug in one place and the other overwrites your fix.
For CSS and JS there is typically a one-to-one mapping. This happens because JS modifies CSS or styles on the page more often than you might like. Each widget and each UI module should have a separate folder structure, file-naming convention, and decoupling from other packages. This makes it simple to reuse these modules without any issues.
I have done this often using technologies like LESS, Grunt, the jQuery UI widget factory, and RequireJS.
Refactoring C# code for quality and performance
In one of my gigs we were a SharePoint Gold partner that wanted to make money off customers who have SharePoint. Trust me, most Fortune 500 companies keep their intellectual property in SharePoint and use it as a major collaboration tool. With the advent of cloud computing, Microsoft introduced Office 365 and started promoting cloud instances of SharePoint over on-premise ones for these big customers. Our code would sit on top of the Microsoft SharePoint stack, and Microsoft opened channels for partners like us to run code in their hosted SharePoint environment. You bet that, since they own the servers and the uptime, they have strict rules for that code.
Typically, in a large codebase, compiler warnings are ignored, mostly because of their low priority. I rarely find projects with zero warnings during the build; it is very difficult to achieve, and it’s a moving target. Yet there are teams who start with that level of code cleanliness (and they should). It’s a developer’s paradise, and I would certainly want to live there. And then there is reality: deadlines, new people, and off-shore teams collaborating.
Without digressing much: Microsoft only allowed our code to run if it was compliant with zero warnings, with some degree of tolerance via thresholds that suppress minor warnings.
MSOCAF = CAT.NET + FxCop + SharePoint API rules.
Making code async (non-blocking) to squeeze out efficiency.
Using IDisposable to dispose of objects properly, with a fallback to finalization.
Implementing the C# event pattern properly, by only passing a type that derives from EventArgs.
Preventing XSS by encoding user inputs (e.g. with the AntiXSS library).
No reflection (it’s pretty costly — avoid it at all costs).
Using the efficient versions of APIs through proper looping.
Avoiding deprecated features of the language.
Storing passwords in a protected string class.
Protecting the native code modules.
Removing unused variables.
There are a lot of rules one cannot remember while coding, since you are focused on solving the problem; static code-analysis tools help with code cleanliness. For us it was a requirement, but the right balance between engineering and business goals should be maintained, and you should communicate it by making everyone aware.
In summary: refactor, but be aware of the priorities. Make sure you raise your voice when you find a trouble spot where code quality is degrading. Keep the right balance — don’t be too obsessed with the implementation, but don’t be a cowboy who keeps adding to the regressions either. Maintain the right balance and communicate.