Tuesday, October 15, 2019

Accelerate Chapter 6 Discussion Points

Chapter 6 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations is about integrating infosec into the delivery lifecycle:
  • Infosec is vitally important; however, infosec teams:
    • Are often poorly staffed
    • Are usually only involved at the end of the software delivery lifecycle
  • Furthermore, many developers are ignorant of common security risks and how to prevent them
  • Building security into software development improves both delivery performance and security quality
  • Shifting left on security
    • When teams build information security into the software delivery process instead of treating it as a separate phase, their ability to practice continuous delivery is positively impacted
    • What does "shifting left" entail?
      • Security reviews are conducted for all major features, and this review process is performed in such a way that it doesn't slow down the development process
      • Infosec experts should:
        • Contribute to the process of designing applications
        • Attend and provide feedback on demonstrations of the software
        • Ensure that security features are tested as part of the automated test suite (see the sketch after this list)
        • Make it easy for developers to do the right things in terms of infosec
    • We see a shift from infosec teams doing the security reviews themselves to giving the developers the means to build security in
  • The rugged movement
    • Rugged software should be resilient in the face of security attacks and threats
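
To make the "security features are tested as part of the automated test suite" point above a bit more concrete, here's a minimal sketch of the kind of check a delivery team and infosec could add to the automated suite. The endpoint, the header names, and the choice of JUnit 5 with Java's built-in HttpClient are illustrative assumptions on my part, not something from the book:
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

class SecurityHeadersTest {

  // Hypothetical base URL for the service under test
  private val baseUrl = "http://localhost:8080"

  @Test
  fun `responses include the headers the infosec team requires`() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("$baseUrl/login")).GET().build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())

    // These checks run on every build, so a regression is caught immediately
    // rather than in a late-stage security review
    assertTrue(response.headers().firstValue("Strict-Transport-Security").isPresent)
    assertTrue(response.headers().firstValue("X-Content-Type-Options").isPresent)
  }
}
Because this runs on every build, a missing header shows up as a failing test right away instead of surfacing at the end of the delivery lifecycle.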

Thursday, October 10, 2019

Accelerate Chapter 5 Discussion Points

Chapter 5 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations covers architecture and its impact on continuous delivery practices:

  • The architecture of your software and the services it depends on can be a significant barrier to increasing both the tempo and stability of the release process and the systems delivered
  • Can DevOps and continuous delivery be applied to systems other than web based, such as mainframe, firmware, etc.?
  • High performance is possible with all kinds of systems, provided that systems and teams are loosely coupled
  • Types of systems and delivery performance
    • The type of software being built corresponded with low performance in only two situations:
      • The software being built, or the services it had to interact with, was custom software developed by another company (e.g., an outsourcing partner)
        • This underlines the importance of bringing this capability in-house
      • Mainframe systems
    • Outside of these two cases, there was no significant correlation between the type of software being built and delivery performance
  • Focus on deployability and testability
    • There are two architectural characteristics that are important to achieving high performance:
      • We can do most of our testing without requiring an integrated environment
      • We can and do deploy or release our application independently of other applications/services it depends on
    • To achieve these characteristics, design systems to be loosely coupled -- that is, so they can be changed and validated independently of each other (see the sketch after this list).
    • According to the 2017 analysis, the biggest contributor to continuous delivery was whether teams could:
      • Make large-scale changes to the design of their system without the permission of somebody outside the team
      • Make large-scale changes to the design of their system without depending on other teams to make changes in their systems or creating significant work for other teams
      • Complete their work without communicating and coordinating with people outside their team
      • Deploy and release their product or service on demand, regardless of other services it depends upon
      • Do most of their testing on demand, without requiring an integrated test environment
      • Perform deployments during normal business hours with negligible downtime
    • Organizations should evolve their team and organizational structure to achieve the desired architecture
    • The goal is for your architecture to support the ability of teams to get their work done without requiring high-bandwidth communication between teams
    • This doesn't mean that teams shouldn't work together, but rather:
      • To ensure that the available communication bandwidth isn't overwhelmed by fine-grained decision-making at the implementation level
      • So we can instead use that bandwidth for discussing higher-level shared goals and how to achieve them
  • A loosely coupled architecture enables scaling
    • If we achieve a loosely coupled, well-encapsulated architecture with a matching organizational structure:
      • We can achieve better delivery performance, increasing tempo and stability while reducing burnout and pain of deployment
      • We can substantially grow the size of our engineering organization and substantially increase productivity as we do so
    • This is based on measuring the number of deploys per day per developer
    • As the number of developers increases:
      • Low performers deploy with decreasing frequency
      • Medium performers deploy at a constant frequency
      • High performers deploy at a significantly increasing frequency
  • Allow teams to choose their own tools
    • There is a downside to a lack of flexibility: it prevents teams from
      • Choosing technologies that will be most suitable for their particular needs
      • Experimenting with new approaches and paradigms to solve their problems
    • When teams can decide which tools they use, it contributes to software delivery performance and, in turn, to organizational performance
    • There is a place for standardization, particularly around the architecture and configuration of infrastructure
    • Teams that build security into their work do better at continuous delivery
      • A key element of this is ensuring that information security teams make preapproved, easy-to-consume libraries, packages, toolchains, and processes available
    • When the tools provided actually make life easier for the engineers who use them, those engineers will adopt them of their own free will
      • This is a much better approach than forcing them to use tools that have been chosen for the convenience of other stakeholders
      • A focus on usability and customer satisfaction is just as important when building tools for internal customers as it is for external customers
      • Allowing your engineers to choose whether or not to use them ensures that we keep ourselves honest in this respect
  • Architects should focus on engineers and outcomes, not tools or technologies
    • What tools or technologies you use is irrelevant if the people who must use them hate using them, or if they don't achieve the outcomes and enable the behaviors that we care about
    • What is important is enabling teams to make changes to their products or services without depending on other teams or systems
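
To make the testability and deployability characteristics above more concrete, here's a minimal Kotlin sketch. The PaymentGateway and CheckoutService names are hypothetical, invented purely for illustration; the idea is that the service another team owns sits behind an interface we control, so most of our validation can run against a fake instead of an integrated environment:
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical boundary: the other team's service hides behind an interface we own
interface PaymentGateway {
  fun charge(accountId: String, cents: Long): Boolean
}

class CheckoutService(private val gateway: PaymentGateway) {
  fun checkout(accountId: String, cents: Long): String =
    if (gateway.charge(accountId, cents)) "CONFIRMED" else "DECLINED"
}

// A fake implementation lets us validate CheckoutService without deploying the real gateway
class FakePaymentGateway(private val succeed: Boolean) : PaymentGateway {
  override fun charge(accountId: String, cents: Long) = succeed
}

class CheckoutServiceTest {
  @Test
  fun `declined charges do not confirm the order`() {
    val service = CheckoutService(FakePaymentGateway(succeed = false))
    assertEquals("DECLINED", service.checkout("acct-42", 1999L))
  }
}
The same boundary that makes this testable in isolation is also what lets the service be deployed and released independently of the real gateway.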

Thursday, October 3, 2019

Accelerate Chapter 4 Discussion Points

Moving on to chapter 4 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, we begin to cover technical practices:

  • Many Agile adoptions have treated technical practices as secondary compared to the management and team practices, but research shows that technical practices play a vital role
  • Continuous delivery practices have a measurable impact on software delivery performance, organizational culture, and other outcome measures
  • What is continuous delivery?
    • A set of capabilities that enable us to get changes to production safely, quickly and sustainably
    • Five key principles of continuous delivery:
      • Build quality in
        • We invest in building a culture supported by tools and people where we can detect issues quickly, so that they can be fixed straight away when they are cheap to detect and resolve
      • Work in small batches
        • By splitting work up into much smaller chunks that deliver measurable business outcomes quickly, we get essential feedback so that we can course correct
      • Computers perform repetitive tasks; people solve problems
        • Reduce the cost of pushing out changes by taking repetitive work that takes a long time (regression testing, software deployments, etc.) and investing in simplifying and automating it
        • Thus we free up people for higher-value problem-solving work
      • Relentlessly pursue continuous improvement
        • High performing teams are never satisfied: they always strive to get better
      • Make the state of system-level outcomes transparent
        • System-level outcomes can only be achieved by close collaboration between everyone involved in the software delivery process
    • In order to implement continuous delivery, we must create the following foundations:
      • Comprehensive configuration management
        • It should be possible to provision our environments, build, test, and deploy in a fully automated fashion purely from version control info
      • Continuous integration
        • Following principles of small batches and building quality in, high-performing teams keep branches short-lived (less than one day's work) and integrate them into trunk/master frequently
      • Continuous testing
        • Because testing is so essential, we should be doing it all the time as an integral part of the development process
        • Automated unit and acceptance tests should be run against every commit
        • Developers should be able to run all automated tests locally in order to triage and fix defects
        • Testers should be performing exploratory testing continuously against the latest builds to come out of CI
        • No one should be saying they are "done" with any work until all relevant automated tests have been written and are passing
  • The impact of continuous delivery
    • Strong impact on software delivery performance
    • Helps to decrease deployment pain and team burnout
    • Teams identify more strongly with the organization they work for
    • Improves culture
    • Lower change fail rates
  • Drivers of continuous delivery
    • Version control
    • Deployment automation
    • Continuous integration
    • Trunk-based development
    • Test automation
    • Test data management
    • Shift left on security
    • Loosely coupled architecture
    • Empowered teams
    • Monitoring
    • Proactive notification
  • The impact of continuous delivery on quality
    • Less time spent on rework or unplanned work
      • Unplanned work: the difference between "paying attention to the low fuel warning light on an automobile versus running out of gas on the highway"
  • Continuous delivery practices: What works and what doesn't
    • Nine key capabilities that drive continuous delivery
      • Version control
        • What was most interesting was that keeping system and application configuration in version control was more highly correlated with software delivery performance than keeping application code in version control
        • Configuration is normally considered a secondary concern to application code in configuration management, but our research shows that this is a misconception
      • Test automation
        • The following practices predict IT performance:
          • Having automated tests that are reliable
          • Developers primarily create and maintain acceptance tests, and they can easily reproduce and fix them on their development workstations
        • None of this means getting rid of testers
        • Testers perform exploratory, usability, and acceptance testing, and help to create and evolve suites of automated tests by working with developers
      • Test data management
        • Successful teams had adequate test data to run their fully automated test suites and could acquire test data for running automated tests on demand
        • Test data was not a limit on the automated tests they could run (see the sketch after this list)
      • Trunk-based development
        • Developing off trunk/master rather than long-lived feature branches was correlated with higher delivery performance
        • Teams that did well:
          • Had fewer than three active branches at any time
          • Their branches had very short lifetimes (less than a day)
          • Never had "code freeze" or stabilization periods
        • These results are independent of team size, organization size, or industry
        • We hypothesize that having multiple long-lived branches discourages both refactoring and intrateam communication
        • Note that open source projects, whose contributors are not working on the project full time, work differently
      • Information security
        • On high performing teams, the infosec personnel provided feedback at every step of the software delivery lifecycle, from design through demos to helping with test automation
      • Adopting continuous delivery
        • Continuous delivery improves both delivery performance and quality, and also helps improve culture and reduce burnout and deployment pain
      • (Here it lists six, and earlier it listed eleven -- so what are the nine?)
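
Picking up the test data management point above, here's a small sketch of what "test data on demand" can look like: instead of depending on a shared, hand-maintained dataset, each test builds exactly the data it needs. The Customer and Order types, the builder, and the VAT rule are all made-up examples, not anything from the book:
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical domain types, used only to illustrate the idea
data class Customer(val id: String, val country: String)
data class Order(val customer: Customer, val totalCents: Long)

// A small builder gives each test exactly the data it needs, on demand,
// instead of relying on a shared database snapshot someone has to maintain
fun anOrder(country: String = "US", totalCents: Long = 10_00L) =
  Order(Customer(id = "cust-1", country = country), totalCents)

// Hypothetical logic under test
fun vatFor(order: Order): Long =
  if (order.customer.country == "DE") order.totalCents * 19 / 100 else 0L

class VatCalculationTest {
  @Test
  fun `german orders are charged 19 percent VAT`() {
    val order = anOrder(country = "DE", totalCents = 100_00L)
    assertEquals(19_00L, vatFor(order))
  }
}
Because the data is generated where it's used, the automated suite never stalls waiting for someone to refresh a shared test database.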

Thursday, September 26, 2019

Accelerate Chapter 3 Discussion Points

Chapter 3 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations covered these points:

  • Culture is of huge importance, but is intangible
  • Needed to find a model of culture that:
    • Was well-defined in scientific literature
    • Could be measured effectively
    • Would have predictive power in our domain
  • It is possible to influence and improve culture by implementing DevOps practices
  • Modeling and measuring culture
    • Organizational culture can exist at three levels
      • basic assumptions
        • Formed over time as members of a group or organization make sense of relationships, events, and activities
        • Least "visible" of the levels
        • Things we just "know"
        • Hard to articulate
      • values
        • Provide a lens through which group members view and interpret the relationships, events, and activities around them
        • More "visible"
        • Can be discussed and even debated by those who are aware of them
        • Quite often the "culture" we think of when we talk about the culture of a team and organization
      • artifacts
        • Most "visible"
        • Can include written mission statements or creeds, technology, formal procedures, or even heroes and rituals
    • Westrum's organizational cultures
      • Pathological (power-oriented)
        • Characterized by large amounts of fear and threat
        • People often hoard information or withhold it for political reasons, or distort it to make themselves look better
      • Bureaucratic (rule-oriented)
        • Departments seek to protect their own interests
        • Those in the department want to maintain their "turf" and insist on their own rules
      • Generative (performance-oriented)
        • Focus on the mission
        • Everything is subordinated to good performance
        • People collaborate more effectively
        • Higher level of trust
    • Organizational culture predicts the way information flows through an organization
    • Good information
      • provides answers to the questions that the receiver needs answered
      • is timely
      • is presented in such a way that it can be effectively used by the receiver
  • Measuring culture
    • Use a Likert scale with strongly worded statements
    • Determine whether the measure is valid from a statistical point of view
    • Check discriminant validity, convergent validity, and reliability (see the sketch after this list)
  • What does Westrum organizational culture predict?
    • Organizations with better information flow function more effectively
    • Better culture leads to better software delivery performance and organizational performance
  • Consequences of Westrum's theory for technology organizations
    • Both resilience and the ability to innovate through responding to change are essential
    • Who is on a team matters less than how the team members interact, structure their work, and view their contributions
    • In the case of failure, our goals should be
      • To discover how we could improve information flow so that people have better or more timely information, or
      • To find better tools to help prevent catastrophic failures following apparently mundane operations
  • How do we change culture?
    • The way to change culture is not to first change how people think, but instead to start by changing how people behave -- what they do
    • Lean management and continuous delivery impact culture
    • You can act your way to a better culture by implementing these practices in tech organizations
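
For the "reliability" part of validating a Likert-scale measure like the Westrum items, one standard statistic is Cronbach's alpha. Here's a small Kotlin sketch of the textbook calculation -- the sample responses are made up, and this is only meant to illustrate the idea, not to reproduce the book's actual analysis:
// Each row is one respondent's answers to four Likert items (1 = strongly disagree ... 7 = strongly agree).
// These numbers are invented for illustration.
val responses = listOf(
  listOf(6, 7, 6, 5),
  listOf(5, 5, 6, 6),
  listOf(7, 6, 7, 6),
  listOf(3, 4, 3, 4)
)

// Sample variance
fun variance(xs: List<Double>): Double {
  val mean = xs.average()
  return xs.sumOf { (it - mean) * (it - mean) } / (xs.size - 1)
}

// Cronbach's alpha: k / (k - 1) * (1 - sum of item variances / variance of total scores)
fun cronbachAlpha(rows: List<List<Int>>): Double {
  val k = rows.first().size
  val itemVariances = (0 until k).map { i -> variance(rows.map { it[i].toDouble() }) }
  val totalVariance = variance(rows.map { row -> row.sum().toDouble() })
  return k.toDouble() / (k - 1) * (1 - itemVariances.sum() / totalVariance)
}

fun main() {
  // A value above roughly 0.7 is conventionally read as acceptable internal consistency
  println("Cronbach's alpha = %.2f".format(cronbachAlpha(responses)))
}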

Wednesday, September 25, 2019

Accelerate Chapter 2 Discussion Points

Moving on to chapter 2 of Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, here's a list of bullet points for discussion:
  • We wanted to discover what works and what doesn't in a scientific way, starting with a definition of what "good" means in this context
  • Measuring performance in the domain of software is hard
  • The flaws in previous attempts to measure performance
    • Many other measures in general suffer from two drawbacks:
      • They focus on outputs rather than outcomes
      • They focus on individual or local measurements rather than team or global ones
    • Three examples:
      • Lines of code
        • Rewarding developers for writing lines of code leads to:
          • Bloated software
          • higher maintenance costs
          • higher cost of change
        • Minimizing lines of code isn't ideal, either
          • In the extreme, leads to cryptic code that would be clearer if written with more lines
      • Velocity
        • Velocity is designed to be used as a capacity planning tool
        • However, some managers have also used it as a way to measure team productivity, or even compare teams, which has several flaws:
          • Velocity is a relative and team-dependent metric, so it can't be compared across teams
          • When used as a productivity measure, teams inevitably game their velocity
          • This can lead to inflated estimates and being uncooperative with other teams
      • Utilization
        • High utilization is only good up to a point
        • Once utilization gets above a certain level, there is no spare capacity (or "slack") for unplanned work, changes to the work, or improvement work
        • This results in longer lead times to complete work
  • Measuring software delivery performance
    • A successful measure of performance should:
      • Focus on a global outcome to ensure that teams aren't pitted against each other
      • Focus on outcomes, not output (shouldn't reward people for large amounts of busywork that doesn't achieve organizational goals)
    • Four measures of delivery performance (see the sketch after this list):
      • Delivery lead time
        • Time it takes to go from a customer making a request to the request being satisfied
        • When we need to satisfy multiple customers in potentially unanticipated ways, the lead time has two parts:
          • The time it takes to design and validate a product or feature
            • high variability ("fuzzy front end")
          • The time to deliver the feature to customers
            • implemented, tested, and delivered
            • easier to measure and lower variability
        • Shorter product delivery lead times:
          • enable faster feedback
          • allow us to course correct more rapidly
          • allow better responsiveness to defects or outages
        • Measured as time to go from code committed to code successfully running in production
          • Point to consider: How does code deployed to production behind a feature toggle count? Does the toggle need to be turned on for it to count?
      • Deployment frequency
        • Closely tied to batch size, but batch size is difficult to measure, and deployment frequency is easy to measure
        • Reducing batch sizes:
          • Reduces cycle times and variability in flow
          • Accelerates feedback
          • Reduces risk and overhead
          • Improves efficiency
          • Increases motivation and urgency
          • Reduces costs and schedule growth
        • Measured as software deployment to production or to an app store
      • Time to restore service
        • It is important that, as performance improves, it doesn't come at the expense of stability
        • Traditionally reliability is measured as time between failures, but with complex software systems, failure is inevitable
        • So the question then becomes: How quickly can service be restored?
      • Change fail rate
        • What percentage of changes for the primary application or service they work on:
          • Result in degraded service, or
          • Subsequently require remediation
            • Lead to service impairment or outage
            • Require a hotfix, a rollback, a fix-forward, or a patch
    • The research shows that high performers do well on all four points, and low performers do poorly on all four points
  • The next question is: Does software delivery performance matter?
  • The impact of delivery performance on organizational performance
    • The research shows that high-performing organizations were twice as likely as low performers to exceed goals in profitability, market share, and productivity
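
Here's the sketch referenced above: a rough illustration of how the four measures could be computed from a log of deployments. The Deployment record and the sample data are hypothetical; in practice the numbers would come from the deployment pipeline and incident tracking:
import java.time.Duration
import java.time.Instant

// Hypothetical record of a single deployment of a service
data class Deployment(
  val committedAt: Instant,       // when the change was committed
  val deployedAt: Instant,        // when it was running successfully in production
  val failed: Boolean,            // did it degrade service or require remediation?
  val restoredAt: Instant? = null // when service was restored, if it failed
)

// Delivery lead time: commit to running in production
fun leadTimes(deploys: List<Deployment>): List<Duration> =
  deploys.map { Duration.between(it.committedAt, it.deployedAt) }

// Deployment frequency: deployments per day over the observed window
fun deploysPerDay(deploys: List<Deployment>): Double {
  val days = Duration.between(deploys.minOf { it.deployedAt }, deploys.maxOf { it.deployedAt })
    .toDays()
    .coerceAtLeast(1L)
  return deploys.size.toDouble() / days
}

// Change fail rate: share of deployments that degraded service or needed remediation
fun changeFailRate(deploys: List<Deployment>): Double =
  deploys.count { it.failed }.toDouble() / deploys.size

// Time to restore service, for the deployments that failed
fun timesToRestore(deploys: List<Deployment>): List<Duration> =
  deploys.mapNotNull { d -> d.restoredAt?.let { Duration.between(d.deployedAt, it) } }

fun main() {
  val t = Instant.parse("2019-09-25T10:00:00Z")
  val sample = listOf(
    Deployment(committedAt = t.minusSeconds(10_800), deployedAt = t, failed = false),
    Deployment(committedAt = t.plusSeconds(86_400), deployedAt = t.plusSeconds(90_000),
      failed = true, restoredAt = t.plusSeconds(93_600))
  )
  println("Lead times:       ${leadTimes(sample)}")
  println("Deploys per day:  ${deploysPerDay(sample)}")
  println("Change fail rate: ${changeFailRate(sample)}")
  println("Times to restore: ${timesToRestore(sample)}")
}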

Wednesday, September 18, 2019

Accelerate Chapter 1 Discussion Points

We've recently started the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (found here). This is a list of discussion points for chapter 1.


  • "Business as usual" is no longer enough to remain competitive
  • In order to delight customers and rapidly deliver value to organizations:
    • Use small teams
    • Work in short cycles
    • Measure feedback from users
  • DevOps movement: how to build secure, resilient, rapidly evolving distributed systems at scale
  • Focus on capabilities, not maturity
    • The key to successful change is measuring and understanding the right things with a focus on capabilities -- not on maturity
    • Maturity model:
      • Focus on "arriving" at a mature state and being done
      • "Lock-step" or linear, prescribing the same thing to all situations
      • Simply measures technical proficiency or tooling install base
      • Defines a static level to achieve
    • Capability model:
      • Focus on continual improvement in an ever changing landscape
      • Customized approach to improvement, with a focus on capabilities of most benefit
      • Focus on key outcomes and how capabilities drive improvement
      • Allow for dynamically changing environments and focus on remaining competitive
  • Evidence-based transformations focus on key capabilities
    • There are disagreements on which capabilities to focus on
    • A more guided, evidence-based solution is needed, which this book aims to show
  • The value of adopting DevOps:
    • The high performers have:
      • 46 times more frequent code deployments
      • 440 times faster lead time from commit to deploy
      • 170 times faster mean time to recover from downtime
      • 5 times lower change failure rate (1/5 as likely for a change to fail)
  • High performers understand that they don't have to trade speed for stability or vice versa, because by building quality in they get both

Thursday, September 5, 2019

Dependency Injection Sans Reflection in Kotlin

A few weeks back we implemented a very basic dependency injection container in Kotlin using reflection (see here). But here's something cool about Kotlin: it's powerful and flexible enough to allow for a pretty solid dependency injection experience without even pulling out reflection or annotation processing. Check this out:
fun main() {
  // Pull the top of the object graph from the injector; everything else is wired in lazily
  val dep4 = Injector.dep4
  println(dep4)
}

object Injector {
  // Declaration order doesn't matter: each lazy delegate only constructs its value on first access
  val dep4 by lazy { Dep4() }
  val dep1 by lazy { Dep1() }
  val dep3 by lazy { Dep3() }
  val dep2 by lazy { Dep2() }
}

class Dep1
data class Dep2(
  val dep1: Dep1 = Injector.dep1)
data class Dep3(
  val dep1: Dep1 = Injector.dep1,
  val dep2: Dep2 = Injector.dep2)
data class Dep4(
  val dep3: Dep3 = Injector.dep3)
And we could take this one step further, and allow for mocks to be injected for integration tests:
// Here's the implementation

fun main() = run()
// Running this main method will print this to the console:
// Dep4(dep3=Dep3(dep1=Dep1@610694f1, dep2=Dep2(dep1=Dep1@610694f1)))

fun run(injectorOverride: Injector? = null) {
  // Tests can swap in their own Injector before anything is resolved
  injectorOverride?.let {
    injector = it
  }
  val dep4 = inject().dep4
  println(dep4)
}

open class Injector {
  open val dep4 by lazy { Dep4() }
  open val dep1 by lazy { Dep1() }
  open val dep3 by lazy { Dep3() }
  open val dep2 by lazy { Dep2() }
}

// Falls back to the production Injector unless one has already been provided
private lateinit var injector: Injector
fun inject(): Injector {
  if (!::injector.isInitialized) {
    injector = Injector()
  }
  return injector
}

class Dep1
data class Dep2(
  val dep1: Dep1 = inject().dep1)
data class Dep3(
  val dep1: Dep1 = inject().dep1,
  val dep2: Dep2 = inject().dep2)
data class Dep4(
  val dep3: Dep3 = inject().dep3)

// Here's some hypothetical test code (a separate file, since the import sits at the top of that file)

import io.mockk.mockk

fun main() = run(TestInjector())
// Running this main method will print this to the console:
// Dep4(dep3=Dep3(dep1=Dep1@72c28d64, dep2=Dep2(#1)))

class TestInjector : Injector() {
  // Explicit type parameter so MockK knows which class to mock
  override val dep2 by lazy { mockk<Dep2>() }
}
This could of course be further improved upon, but it shows that in not all that many lines of code, we've got a pretty solid dependency injection setup.