Use Google Cloud Functions to auto-load your data imports into Google Analytics from Cloud Storage
With the launch of the Google Natural Language API (NLP API), and the emphasis on machine learning, which is said to account for up to 30% of Google’s search ranking algorithm, a natural question is whether you can use Google’s own machine learning APIs to help optimise your website for search. Whilst I don’t believe they will offer exactly the same results, I can see some useful applications.
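One example is checking which entities Google’s NLP extracts from your page copy. A minimal sketch, assuming the googleLanguageR package and a service account key (both assumptions on my part; the post itself may call the API directly):

```r
# a sketch, assuming googleLanguageR and a service account JSON key
library(googleLanguageR)
gl_auth("my-service-key.json")   # hypothetical key file

page_text <- "R and the Google Cloud make a powerful combination for digital analytics."
nlp <- gl_nlp(page_text)

# the entities Google recognises in your copy - one signal of topical relevance
nlp$entities
```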
I recently got an Asus Chromebook Flip, with which I’m very happy, but it made me realise that if a Chromebook were to replace my normal desktop as my primary workstation, my RStudio Server setup would need to be more cloud native than it has been up to now. TL;DR - a how-to on making RStudio Server run on a Chromebook that automatically backs up data and configuration settings to Google Cloud Storage is on the googleComputeEngineR website here.
A common question I come across is how to automate the scheduling of R scripts that download data. This post goes through some options I have played around with, which I’ve mostly used for downloading API data such as Google Analytics on the Google Cloud Platform, but the same principles apply to AWS or Azure. But first, some advice on the scripts you are scheduling, picked up along the way.
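Whatever the platform, the end of the pipe is often cron on a Linux VM. A minimal sketch using the cronR package (my choice here, one of several options; the script path is hypothetical):

```r
# a sketch, assuming a Linux VM with cron and the cronR package installed
library(cronR)

# schedule a hypothetical download script to run every day at 05:00
cmd <- cron_rscript("/home/mark/download_ga.R")
cron_add(cmd, frequency = "daily", at = "05:00",
         id = "ga_download", description = "Daily Google Analytics fetch")
```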
A full list of the R packages I have published is on my Github, but some notable ones are below. Some are part of the cloudyR project, which has many packages useful for using R in the cloud. I concentrate on the Google cloud below, but be sure to check out the other packages if you’re looking to work with AWS or other cloud-based services.

- googleAuthR - the central workhorse for authentication on Google APIs
- googleAnalyticsR - works with Google Analytics Reporting V3/V4 and Management APIs
- googleComputeEngineR - launch Virtual Machines within the Google Cloud, via templates or your own Docker containers
A new year, a new blogging platform! This time I’m moving from Jekyll to RStudio’s new blogdown format. This keeps the advantages of Jekyll (a static, high-performance website; markdown for editing; free hosting on Github) with the extra bonus of being able to render RMarkdown, plus some nice-looking capabilities from the Hugo project.
As analysts, we are often called upon to see how website metrics have improved or declined over time. This is easy enough when looking at trends, but if you want to break the change down by other dimensions, it can involve a lot of ETL to get what you need. For instance, when looking at the landing page performance of SEO traffic you can sort by the top performers, but not by the most improved performers, as in the sketch below.
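A minimal sketch of the idea, assuming googleAnalyticsR is authorised and my_view_id holds your (hypothetical) Google Analytics View ID:

```r
# a sketch: pull the same report for two periods and rank by improvement
library(googleAnalyticsR)
library(dplyr)

fetch <- function(dates) {
  google_analytics(my_view_id,              # hypothetical View ID
                   date_range = dates,
                   metrics = "sessions",
                   dimensions = "landingPagePath",
                   max = -1)
}

before <- fetch(c("2017-01-01", "2017-01-31"))
after  <- fetch(c("2017-02-01", "2017-02-28"))

# (filtering to organic traffic only is omitted for brevity)
inner_join(before, after, by = "landingPagePath",
           suffix = c("_before", "_after")) %>%
  mutate(delta = sessions_after - sessions_before) %>%
  arrange(desc(delta))   # most improved landing pages first
```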
I’ve written previously about how to get RStudio Server running on Google Compute Engine: the first post, in July 2014, gave you a snapshot to download then customise; the second, in April 2016, launched via a Docker container. Things move on, and I now recommend the process below, which uses the RStudio template in the googleComputeEngineR package, now on CRAN. Not only does it abstract away a lot of the dev-ops setup, it also gives you more flexibility by taking advantage of Dockerfiles.
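At its simplest the launch is a couple of lines. A sketch, assuming your Google Cloud project and authentication are already configured via googleComputeEngineR’s environment variables (the name and credentials below are placeholders):

```r
library(googleComputeEngineR)

# launch an RStudio Server VM from the package's built-in template
vm <- gce_vm(template = "rstudio",
             name = "rstudio-server",
             username = "mark", password = "mark1234",   # change these!
             predefined_type = "n1-standard-1")
```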
There are now several packages built upon the googleAuthR framework that are helpful to a digital analyst using R, so this post demonstrates how they all work together. If you’re new to R and would like to know how it helps with your digital analytics, Tim Wilson and I ran a workshop last month aimed at getting a digital analyst up and running. The course material is online at www.
Avoiding sampling is one of the most common reasons people start using the Google Analytics API. This blog lays out some pseudo-code to do so in an efficient manner, avoiding too many unnecessary API calls. The approach is used in the v4 calls for the R package googleAnalyticsR. Avoiding the daily walk: the most common approach to mitigate sampling is to break down the API calls into one call per day.
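The finished implementation is exposed in googleAnalyticsR as the anti_sample argument; a minimal usage sketch (my_view_id is a hypothetical View ID):

```r
library(googleAnalyticsR)
ga_auth()

# anti_sample batches the calls only as much as needed to avoid sampled data
unsampled <- google_analytics(my_view_id,
                              date_range = c("2017-01-01", "2017-03-31"),
                              metrics = "sessions",
                              dimensions = c("date", "landingPagePath"),
                              anti_sample = TRUE)
```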
In this blog we look at a method to estimate where to prioritise your SEO resources, by estimating which keywords will give the greatest increase in revenue if you could improve their Google rank. Thanks to Vincent at data-seo.com, who proofread and corrected some errors in the first draft. Data comes from Google Search Console and Google Analytics. Search Console supplies the keywords, in these days of (not provided).
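A sketch of the Search Console pull, assuming searchConsoleR is authorised and the site URL is a placeholder:

```r
library(searchConsoleR)
scr_auth()

queries <- search_analytics("http://www.example.com",
                            startDate = Sys.Date() - 93,
                            endDate = Sys.Date() - 3,    # allow for data lag
                            dimensions = c("query", "page"),
                            rowLimit = 5000)

# clicks, impressions, CTR and position per keyword,
# ready to join with Google Analytics revenue by landing page
head(queries)
```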
edit 20th November, 2016 - everything in this post is now abstracted away and available in the googleComputeEngineR package, which I would say is a lot easier to use. Here is a post on getting started with it: http://code.markedmondson.me/launch-rstudio-server-google-cloud-in-two-lines-r/ This blog gives you the steps to run a server on Google Compute Engine with these features: an RStudio Server instance with multiple logins; Apache to host a welcome webpage.
googleAuthR is now on CRAN at version 0.2.0. This release is the result of using the library myself to create three working Google API libraries, and tweaking the googleAuthR code to better support the process. As a result, all of these libraries can now be authorised with one Google OAuth2 login flow: googleAnalyticsR, searchConsoleR and bigQueryR. Batching: this means the libraries above, and any other created with googleAuthR, can take advantage of batching, a Google API feature that lets you send multiple API calls at once.
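A sketch of the shared login, assuming the default read scopes for each API (the scope choice is mine; adjust to your needs):

```r
library(googleAuthR)

# request one token that covers all three downstream libraries
options(googleAuthR.scopes.selected = c(
  "https://www.googleapis.com/auth/analytics.readonly",   # googleAnalyticsR
  "https://www.googleapis.com/auth/webmasters",           # searchConsoleR
  "https://www.googleapis.com/auth/bigquery"              # bigQueryR
))

gar_auth()   # one OAuth2 flow, then all three packages share the token
```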
Hello World! Welcome to my new home for code blogging. I’ll keep the regular blog for all the other stuff, running on Posthaven. Why? It’s educational, it’s free, and writing code posts in Posthaven’s rich text editor was sometimes painful. With this setup it’s all written in Markdown, so I get control over everything just the way I want it; and since Jekyll combines Github and Markdown, two things I know well already, it seemed worth knowing.