Summer Program in Google Apps Brings Texas Businesses, Students and Cloud Tech Together

Students in Texas planning to attend college in the Austin area are invited to apply for a paid summer internship as a Google Apps Administrator. For the fourth year in a row, Coolhead Tech will teach and train local college students in Google Apps administration. A selected student receives a scholarship for Apps Admin classes and a weekly stipend for Google Apps work-study with local businesses. Successful students are invited to work throughout the fall semester and take the Google Apps Certified Administrator Exam offered by Google.

College Student Success Stories Born in the Cloud.

Aaron Wilson, a graduate of The University of Texas, is one of Coolhead Tech's internship success stories from 2013. At Coolhead, he developed Python CGI scripts to access the Google Apps Reseller console through the Google API (application programming interface). When Coolhead was in a pinch, Aaron also helped with web development, JavaScript, and deployments. Now he's embarked on a career as a programmer in New York City, armed with the confidence and skills he earned at UT and at his Coolhead Tech internship in Austin.

“I had taken all of the tech courses provided for me in high school but never any computer science,” Aaron recalls. But he realized that was where his interests lay, so he looked at his options. He was reluctant to pursue a programming internship because he didn’t think he had what it took. He says taking “menial jobs” instead was a mistake.

Google Certification and Real World Validation

Brian Haley grew up in Waco, Texas, and became a summer intern in Apps Admin while he was a student at Austin Community College between 2013 and 2014. Brian continued with the program, working with Austin’s leading custom home builder, Gossett Jones Homes. At GJHLife, Brian specialized in Google Drive for Work deployment and administration. Now Brian specializes in both Drive for Work and Google Apps Script, writing custom Google Apps automations for Google customers around the world.

Shortly after it was introduced, Brian was among the first to earn Google Certification in Apps Administration. In 2014, Brian earned a paid trip to the Googleplex, training directly at Google’s headquarters in California. Now he’s considering a position in Washington, D.C., as the Google Apps Certified Administrator for a presidential campaign.

Apps Admin is S.T.E.M. and then some.

Brittni was born and raised in Corpus Christi, Texas, the “City by the Bay”—although she calls it more of a “sleepy town” by the bay. Growing up, Brittni was always writing stories, usually fiction and science fiction. She had a passion for writing. That’s why she went into journalism and advertising when she headed off to college.

“Bobcat Country” was calling to her, but she was on the fence about what to do. She spoke with one of her professors, who suggested she go with what she truly wanted.

“It was then that I realized that this was my future I was determining. My answer was Texas State—the best decision I could make. It was the first step to somewhere greater.”

When the time came for her to get an internship, she attended the fall 2012 internship fair at Texas State in San Marcos, run by Campus2Careers. She gathered her freshly printed resumes, nervous and excited and not knowing what to expect.

Brittni began working at Coolhead Tech, writing blogs about Google Apps and other business apps and running the social media campaigns.

“The thing I loved about interning with Coolhead Tech is that I was learning while earning a weekly stipend. I began to take HubSpot classes, learning more about inbound marketing. Learning how to effectively use HubSpot to connect clients is the coolest thing I learned while interning. It’s all about strategy, and it is a great feeling to see the business reap the rewards from the work and ideas you put in.”

Leading the Way to Process Transformation in 2015

Many Google Apps customers spend the majority of their initial deployment time and effort ensuring that users are functional with Google messaging tools such as Mail and Calendar. While this is important work, swapping one email system for another can’t really be considered ‘transformational’. The real opportunity to drive process, business, and cultural transformation within an organization comes when employees use the full power of Google Apps and identify new ways of collaborating internally and with customers. Interns help bridge the generational gap in technology by bringing fresh ideas to the workplace.

We met Mr. Gray, Google Apps Certified Administrator for St. Stephen’s Episcopal School in West Austin, at a sponsored event in 2013, and we’ve been working together to deliver Apps Admin courses that prepare students for Google’s certification exam. Brian teaches the summer Google Apps Certified Administrator course. Brian is also a teacher at St. Stephen’s and has been a member of the faculty there for the past ten years.

Apps Admin Grads and Interns from Texas State like Najib Momin and Jules Kabamba are providing leadership in business transformation by inspiring and building solutions with business context.

Today the summer internship program for Google Apps brings business, education, people, processes, and technology together, transforming the way we work. Students participate in and lead this half-day guided workshop. Teams look at current business processes, brainstorm and build prototypes using Google for Work and Cloud solutions, and walk away with a roadmap to long-term innovation.

Students support business transformation and growth throughout the school year as Google Certified Administrators, Apps Script programmers, corporate use trainers, and consultants. As Google Apps Administrators, interns support businesses ranging from fifty to five hundred users.

Now is the time to decide how high a priority process transformation should be for your organization. Move from transition to transformation and support student interns working toward Google Apps Certification.

Solution Showcase: Do More with Google for Education and Coolhead Tech



Atlantis – Our First Underwater Datacenter

We’re very excited to announce a new region: Atlantis (datacenter abbreviation: H2O), submerged in the Strait of Gibraltar. This underwater datacenter will provide unparalleled connectivity to surrounding countries like Spain, Portugal, Morocco, Algeria, and Tunisia.

While we are still actively building out our German datacenter, we wanted to investigate the money-saving possibilities of underwater datacenter cooling. Our investigation was a great success: not only were we able to reduce our electricity costs by 35%, but we discovered our high-density SSD storage was even more dense at 87atm! Despite dramatically efficient cooling and more GB per cubic inch, these servers will still be offered at our standard pricing plan as any savings we found were, unfortunately, offset by the cost of diving equipment.

While this datacenter may come as a pleasant surprise to residents in the surrounding countries, we have actually been actively looking into the possibility since mid-2013, inspired by Facebook’s energy efficient Arctic Datacenter. Some potential issues we faced in our initial investigations included transporting safe electrical current under the sea, providing sufficient illumination on the ocean floor (around 900 meters deep), and our technicians’ inability to swim.

You can easily spin up a server in the new region by selecting “Atlantis” in the Droplet create screen or choosing that location in the API. Our initial run of servers in this region is limited. We will be adding more capacity to H2O at low tide.

When asked about the new location, DigitalOcean’s Director of Infrastructure, Lev Uretsky explained: “Our Datacenter Techs are very excited about Atlantis. We firmly believe that this will be the easiest DC to rack, as our servers become much lighter underwater.”

If this sounds exciting to you, DigitalOcean is actively hiring for the new location. Scuba certified candidates are welcome to apply. Background in Marine Biology a plus.

DigitalOcean Blog

Announcing Our 4th Annual Summer Intern Program for Google Apps Admins

Every summer since 2011, Coolhead Tech has paid a handful of summer interns to work locally with some of Austin’s most innovative companies. Students work with Austin startups and businesses on initial deployments, as well as with organizations that have already adopted Google Apps and are seeking to move from transition to transformation.

The summer intern program for Google Apps Admins in Austin is an excellent opportunity for high school seniors, college students and recent college graduates seeking instruction and experience to earn third party technical certification from Google.


The summer internship program for Google Apps Admins runs between June 22nd and July 22nd, 2015 and includes:

  • Classroom Instruction

  • Real World Experience

  • Paid Internship Opportunities

  • Free Test Domain & Accounts

  • Exclusive Google Resources

  • All Google Exam Fees Included

If you or someone you know is interested in this opportunity to train with Google Certified Administrators in Austin, request more information now; classroom space is limited.


Summer of Apps 2015:  Get More Info on Google Apps Admins.



Taming Your Go Dependencies

Internally at DigitalOcean, we had an issue brewing in our Go code bases.

Separate projects were developed in separate Git repositories, and in order to minimize the fallout from upgraded dependencies, we mirrored all dependencies locally in individual Git repositories. These projects relied on various versions of packages, and the problem was that there was no deterministic way to distinguish which project required what and when.

As a team, we knew this approach was not optimal, but coming to a consensus on a single way to manage packages was a tough decision. With a little bit of effort, we arrived at a solution which addressed the issue of managing package versions without needing an external management tool. We call our effort cthulhu, which is our Go repository. We also refer to it as a mono repo.

What’s a Mono Repo?

Building a cloud is fast-paced business. We have Go projects that serve APIs, move bits around from server to server, and crunch numbers. Because many of these projects share a common set of components, we determined it would be easier to create a single Git project and import all the existing projects. Here’s the high level structure of the project:

    ├── docode
    │   └── src
    └── third_party
        └── src

It is called a mono repo because we only have one repository. Our setup is straightforward. We have a root directory that serves as the base for cthulhu. Underneath this root, we have two additional directories: docode for our code, and third_party for other people’s code.

To develop Go software, set your GOPATH to ${CTHULHU}/third_party:${CTHULHU}/docode. That’s it!
The reason the third_party directory is listed first is to ensure that, when packages are fetched using go get, they’ll be installed in its src/ rather than under docode.

At this point, you can create a script that can be sourced into a shell, and you can start developing.

Why Is This Good?

First and foremost, we believe the mono repo is a good idea because using it is frictionless. There are no arcane actions or sacrifices required to configure an individual developer’s workstation.

It is also beneficial because at this point of DigitalOcean’s Engineering team’s evolution, having a single repository for editing software means it is less likely for projects to get lost. Finding code is easy using the mono repo and our team’s simple conventions for naming services. We have three types of code: doge, our internal standard library, which contains code that is reused throughout the repository; services, which contains all of our business logic; and tools, which are one off applications and utilities used to manage our Go code, like our custom import rewriter that sorts and separates imports based on our current code guidelines.

    ├── docode
    │   └── src
    │       ├── doge
    │       ├── services
    │       └── tools
    └── third_party
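
As a concrete illustration of what an import rewriter like the one mentioned above does, here is a minimal Go sketch that separates import paths into standard-library, third-party, and internal groups and sorts each group. The grouping heuristics and the internal prefixes (doge/, services/, tools/) follow the layout described in this post; the function itself is hypothetical, not DigitalOcean's actual tool.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// groupImports splits import paths into three sorted groups:
// standard library, third-party, and internal (doge/, services/,
// tools/). Stdlib paths are recognized by having no dot in their
// first path element, a common heuristic.
func groupImports(paths []string) [][]string {
	var std, third, internal []string
	for _, p := range paths {
		switch {
		case strings.HasPrefix(p, "doge/") || strings.HasPrefix(p, "services/") || strings.HasPrefix(p, "tools/"):
			internal = append(internal, p)
		case !strings.Contains(strings.SplitN(p, "/", 2)[0], "."):
			std = append(std, p)
		default:
			third = append(third, p)
		}
	}
	for _, g := range [][]string{std, third, internal} {
		sort.Strings(g) // sorts each group in place
	}
	return [][]string{std, third, internal}
}

func main() {
	groups := groupImports([]string{"fmt", "github.com/foo/bar", "doge/log", "net/http"})
	fmt.Println(groups)
}
```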

Because all of our Go is in a single repository, everything uses the same versions of external and internal dependencies. If a package is upgraded, every service which depends on the package receives the new functionality. This helps when dealing with security issues. It’s also nice to not have to manage versions explicitly. For our purposes, the canonical version is what’s under third_party/src. If your work requires an upgrade, you install the new dependency, run the tests, and then send a pull request.

It Isn’t All Rainbows.

Our mono repo is a great solution for us, but it doesn’t come without its own set of caveats.

One of the largest issues is actually an issue with Git. Git prescribes submodules for including dependencies in your main repository. When submodules work correctly, there are no problems, but when they don’t, it’s a thorny pain for everyone involved. In this case, we chose to sidestep the problem. Instead of dealing with submodules or an external management solution, we rename the .git directory (if there is one) for each of our dependencies. Because the .git directory doesn’t exist, Git considers the dependency’s contents to be just another set of files. If you want to upgrade the package, revert the directory name and update. This isn’t an amazing experience, but it is simple.

Additionally, when you share a repository with all the other projects, you inherit all the other projects’ issues. This means that if one of our individual services has a slow test suite, all services have a slow test suite. In general, testing Go is very fast. When you involve external tests, like database integration, things can slow down. One solution is to use the short flag to skip the long tests. Another is to run tests for individual packages. The DigitalOcean Engineering team is still testing and deciding which solution works best for us.
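
The short-flag convention looks like this in practice: guard a slow external test behind testing.Short() so that go test -short skips it. The test name and the modeFor helper below are hypothetical, included only to make the decision concrete.

```go
package main

import (
	"fmt"
	"testing"
)

// modeFor mirrors, in miniature, the decision a test makes under
// -short: slow tests skip themselves, fast tests always run.
func modeFor(short bool) string {
	if short {
		return "skip"
	}
	return "run"
}

// TestSlowIntegration is a hypothetical example of the pattern:
// in a real foo_test.go file, an expensive external test (say,
// database integration) checks testing.Short() before proceeding.
func TestSlowIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping slow integration test in -short mode")
	}
	// ... hit the real database here ...
}

func main() {
	fmt.Println(modeFor(true), modeFor(false))
}
```

Running the suite with `go test -short ./...` then exercises only the fast tests.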

Where Do We Go Next?

Currently, our mono repo serves our needs well. It is an easy concept for newer developers to grasp, it doesn’t require any external dependencies, and it allows us to co-locate all of our Go code. In a nutshell, it’s a great thing for us and we believe it could be a great thing for other teams working with Go as well.

by Bryan Liles

DigitalOcean Blog

By: zedomax

yeah used to be great 100% uptime for like a year straight for me then suddenly beginning of this year they went downhill, I’ve had about 12-24 hour downtime during their SAN update.  Now it’s like they do SAN update every week, time to get out, something happened with, it’s like you said, 90% uptime at best or 75% for me.  This sucks since they don’t address the issue and fail to provide a solution.  

Comments on: is Dead. Long Live!

Presenting FreeBSD! How We Made It Happen.

We’re happy to announce that FreeBSD is now available for use on DigitalOcean!

FreeBSD will be the first non-Linux distribution available for use on our platform. It’s been widely requested because of its reputation as a stable and performant OS. While similar to other open source Unix-like operating systems, it’s unique in that the development of both its kernel and user space utilities is managed by the same core team, ensuring consistent development standards across the project. FreeBSD also offers a simple yet powerful package management system that allows you to compile and install third-party software for your system with ease.

One particularly compelling attribute of the FreeBSD project is the quality of their documentation, including the FreeBSD Handbook which provides a comprehensive and thoughtful overview of the operating system. We at DigitalOcean love effective and concise technical writing, and so we’ve also produced numerous FreeBSD tutorials to aid new users with Getting Started with FreeBSD.

We understand that this has been a long-standing user request, and we’ve heard you. You might be asking yourself: what took so long?

The internal structure of DigitalOcean’s engineering team has changed rapidly due to the dynamic growth of the company. What began as a couple of guys coding furiously in a room in Brooklyn has ballooned into a 100+ person organization serving hundreds of thousands of users around the globe. As we’ve grown, we’ve needed to adjust and reorganize ourselves and our systems to better serve our users. There have been many experiments in how we approach, prioritize, and execute this work; this FreeBSD image is the result of the successful alignment of a few key elements.

Technical Foundation

Last year, we built our metadata service, allowing a droplet to access information about itself at the time it’s being created. This is powerful because it gives a vanilla image a mechanism to configure itself independently. This service was a big part of what allowed us to offer CoreOS, and building it gave us more flexibility in what we could offer moving forward. Our backend code would no longer need to know the contents of an image to be able to serve it. On creation, the droplet itself could query for configurables (hostnames, SSH keys, and the like) and configure itself instead of relying on a third party.

This fundamental decoupling is an echo of a familiar refrain: build well defined interfaces and don’t let knowledge leak across those boundaries unnecessarily. It’s allowed us to free images from customization by our backend code, and entirely sidestep the problematic issue of modifying a UFS filesystem from a Linux host.

Since we now had a feasible mechanism to allow images to be instantiated independently of our backend, we just needed to put the parts together that would allow us to inject the configuration upon creation. FreeBSD doesn’t itself offer cloud versions of the OS similar to what Canonical and Red Hat provide, so we started from a publicly available port of cloud-init meant to allow FreeBSD to run on OpenStack.

Because DigitalOcean’s droplets use static networking, we need an initial network configuration before we can query metadata. During boot, we bring the droplet up on an IPv4 link-local address to make the initial query to the service. From there, we pick up the real network config, hostname, and SSH keys. The cloud-init process then writes a configuration associated with the droplet’s ID. Linking the configuration to the droplet ID is what lets it know whether the image is being created from a snapshot, is a new create, or is just a rebooted instance of an already configured droplet.

Once this configuration has been injected, FreeBSD’s boot process can continue and use it accordingly — eventually booting into the instance as expected.


This endeavor began life as an experiment in how we organize ourselves within the engineering team. We were given a few weeks to pick a project, self-organize into cross-functional teams, and execute. A lot went right during this process that allowed the project to succeed.

Deadlines are powerful things. Not in a punitive or negative sense, but in the sense that there is a well-defined time when work will collectively end. So is having a very clear picture of what “done” looks like. In the case of BSD, it was particularly powerful to have the clear goal of a functional alpha BSD droplet, with a date to drive for. Given the freedom to focus on a single goal, clear communication, and well-defined constraints, we were able to finally deliver a long-standing user request with relative ease.

This is the start to the many things we’re excited to build in 2015!

By: Neal Shrader

DigitalOcean Blog

5 Basic Benefits of Google Business Apps

With apps available everywhere, it’s sometimes hard to decide which apps you should use for your business. When choosing an app, it’s important to know how it will benefit your business. If you’ve spent any time on the Internet, you already know the name Google, so it shouldn’t surprise you that Google has some of the best business apps available. Here are five benefits of using Google Business Apps.

Easy to Start and Use

Unlike many apps, Google Business Apps are intuitive. Calendars, email, file sharing, contacts, and more are simple to use and work on virtually any internet-connected device. Google Business Apps update automatically when there is a new feature or upgrade, so you no longer have to wait for new features unless your system administrator prefers to roll them out manually.

As Powerful As You Need It

Google Business Apps includes Google Apps Script and the Google for Work APIs, enabling your system administrators and software developers to integrate their own software. Along with third-party integrations, you will find endless possibilities and extensions for Google for Work.

Cost Efficient

For about the price of a latte from your favorite coffee bistro each month, Google Business Apps can give you the tools to expand your business. For only $5 a month, you receive email addresses with your company’s name, 30 GB of storage for file storage and sharing, online calendars, and the ability to easily create online spreadsheets, slides, text documents, and more. All these features come with admin controls and security from a name you can trust. If you prepay for a year, you will actually save $10.


Secure

The safety of your data is a top priority at Google. The company is certified at the FISMA-Moderate level, the same level of certification required for internal email use within the United States government. Google is also capable of supporting HIPAA compliance. Google is trusted by millions to secure their email, routinely scanning messages for viruses, phishing, malware, and other threats before a document is downloaded.

Mainstream Support

Google Business Apps are intuitive; however, there may come a time when you need assistance with a certain app. There is a range of resources with excellent support to answer your questions and concerns. You can seek help from Google Certified Administrators, through Google’s online forums, from support websites such as the Google Gooru, and from companies specializing in Google support. With an array of support outlets, you can quickly find a solution.

This is why Google Business Apps are the right choice for your business. Google Business Apps has the security, reliability, support, and cost-efficient features that will place your company at an advantage above the rest. Discover how you can lead your company with Google Business Apps today.


By: Socrates

Perhaps it is time for you to move away from too my friend?!


By: Simon Dann

I have had a couple of really alarming issues with aside from the up-and-down nature of their service. I run a Debian 6.0 VPS with them which I can’t update, because each time I do, it breaks due to a flaw in the container that they know about but won’t fix.

I also run a number of servers from home for svn, email and personal blog and have managed to keep a 99.9% up-time record for the past six months, better than what my records show I am getting from

Initially they were really good, customer support was top notch and I couldn’t fault them, but as the year has progressed their support’s English skills have diminished and the outages continue to rise both in number and length.

Maybe time to move myself.


What’s Your Libscore?

The contributors to Libscore, including our own Creative Director Jesse Chase, wanted to offer this post as a thank you for all the support the project has received. Julian Shapiro launched Libscore last month hoping that the developer community would find the tool useful, and continues to be grateful for all of the positivity and constructive feedback throughout the web.

For those who haven’t heard, Libscore is a brand new open-source project that scans the top million websites to determine which third-party JavaScript libraries they are using. The tool aims to help front-end open source developers measure their impact – you can read all about it here.

In this post, we’ll break down the technology that Libscore leverages and discuss some of the challenges getting it off the ground. We were also lucky enough to talk with Julian and get some insight as to where he sees the project going.

Thomas Davis: A Technical Overview

Unlike traditional web crawlers, Libscore loads each website into a headless browser and thoroughly scans its run-time environment. This allows Libscore to monitor the operating environment of each website and detect as many libraries as possible, even those that have been pre-bundled and required as modules. The tradeoff, of course, is that running one million headless browser sessions is much more resource intensive than performing basic cURL requests and parsing static HTML.

The biggest insight we gained while designing the crawler is that the best way to weed out false positives for third-party plugins is to leverage the broader data set we’re aggregating. Specifically, we discard candidate libraries that don’t appear on at least 30 of the 1 million sites crawled. Meta-heuristics like this let us more confidently detect libraries that are in fact third-party plugins, and not just arbitrary JavaScript variables leaking into the global scope.
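
That thresholding step is simple to express. Here is a minimal Go sketch of the idea; the data shape and names are illustrative, not Libscore's actual implementation (which runs on Node.js).

```go
package main

import "fmt"

// filterLibraries drops candidate libraries seen on fewer than
// minSites of the crawled sites, weeding out stray globals that
// are not real third-party plugins.
func filterLibraries(siteCounts map[string]int, minSites int) map[string]int {
	kept := make(map[string]int)
	for lib, count := range siteCounts {
		if count >= minSites {
			kept[lib] = count
		}
	}
	return kept
}

func main() {
	counts := map[string]int{
		"jquery":      680000,
		"velocity":    12000,
		"myLeakedVar": 3, // a stray global, not a real library
	}
	fmt.Println(filterLibraries(counts, 30))
}
```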

On the backend, crawls are queued via Redis, with the results stored in MongoDB. Both services are loaded fully into RAM, which allows our RESTful API to serve requests faster than it could by querying disk. The main bottleneck to crawling concurrency is network bandwidth, but thanks to DigitalOcean, it was a breeze to repeatedly clone instances and run crawls during off-peak times in different regions. Ultimately, using just a few high-RAM DigitalOcean instances, we parse 600 websites per minute and complete the entire crawl in under 36 hours at the end of each month.

As the crawler runs, raw library usage data for each site is appended to a master JSON file, which we simply read from the file system with Node.js. Once all the raw usage data is collected, we start a process dubbed “ingestion”, which aggregates the results and makes them accessible via the API. We initially attempted to load the entire dataset into RAM to perform our calculations, but quickly ran into a quirky problem: V8 cannot allocate much more than approximately 1 GB of memory for arrays. For now, we split the raw dump into smaller files to bypass the memory limit, though in the future we might rewrite the project in a more suitable language and environment.

Jesse Chase: Design Improvements

While Libscore currently serves as an invaluable tool for surfacing library adoption data, the future is even more exciting. To illustrate, let’s jump ahead six months, smack in the middle of summer. At this point, Libscore will have crawled through the top million sites six times (or 6 million domain crawls!), bringing forth rich month-over-month trend data on library usage.

By providing users with a soon-to-be-released time series graph, with the ability to plot multiple libraries over the same time period, developers will gain new insights into how libraries are changing over time. For example, users will be able to see why a library’s usage plummeted from one month to the next – potentially due to the increased adoption of another library. Soon, this data will be fully visualized.

Julian Shapiro: The Future Of Libscore

Libscore is more than a destination for JavaScript statistics; it’s also a data store that can be leveraged in the marketing of open source projects. One way we’re enabling this is via embeddable badges that showcase real-time site counts. Open source developers can show off these badges in their GitHub READMEs, and journalists writing about open source can include them to provide context on the real-world usage of libraries.

In addition to badges, we’re also releasing quarterly reports on the state of JavaScript library usage. These reports will showcase trends, helping developers learn which libraries are rising in popularity and which are falling. We hope these reports will become a valuable contribution to discussions around the state of web development tooling, and to finally provide the community with concrete data they can use to make decisions.

Creator and developer – Julian Shapiro
Backend developer – Thomas Davis
Creative Director – Jesse Chase

DigitalOcean Blog

Linux Web Hosting Package

Web designers often evaluate web hosting packages and services before executing their web design plans. A wide variety of web hosting services is available to meet designers’ needs, but choosing from the plethora of options is a tedious task. The first step is to choose an appropriate domain name that suits the website. The web hosting package is the next and most important step, as this is where you will upload your files.

Web hosting packages play an important role in web design. Hosting services are available on different operating systems and in different forms, such as Linux hosting, VPS hosting, and Windows hosting. Certain essential ingredients are needed in every hosting plan, so it is important for web designers and webmasters to choose the plan and services best suited to their needs.

One of the most common hosting plans is Linux hosting, which is more secure and efficient than many alternatives. It is among the most popular hosting choices because the underlying software is freely available. A typical plan comes with a bundle of services and components such as the Apache server, the PHP language, and the MySQL database system. High performance, greater stability, and lower cost are key features of the Linux operating system, and most designers prefer it for these advantages.

Advantages of Linux web hosting services:

There are various advantages that make this service stand out in today’s era and make it a preferable choice among the designers and webmasters.

  1. Availability: These web hosting services are freely available around the web. Since the software is open source, designers don’t have to invest a large budget in hosting, and the software can be modified by anyone according to their requirements.
  2. Secure and Stable: The key features of this hosting service are its security and stability, which yield high productivity and efficient performance.
  3. Low Cost: This web hosting service suits individuals who do not have a high budget; the plan is quite cheap and can be the preferred solution.
  4. Combined Package: This hosting plan comes with a bundle of services, including FTP access, the Common Gateway Interface (CGI) mechanism, and MySQL database systems. These services set this hosting plan apart, and they are the main reason most designers find it an appropriate choice today.
  5. Effective Administration Services: The administration services offered by this platform are very effective and provide excellent results.
  6. Reliable: This hosting plan is very reliable, providing a way to solve every technical problem that can arise. Its reliability, security, and performance are added advantages.