I started using Adobe Experience Manager (CQ 5.6.1) with a focus on component development and building OSGi services, and I strongly believe that learning how to leverage AEM’s capabilities (as well as its underlying technologies, like Apache Sling) is key to a successful CMS implementation.
With that in mind, I’ve been keeping a list of useful tips and tricks that I’d like to share with you. These are mostly about increasing productivity when working with AEM, or just general things I wish I’d known about earlier. This post is targeted more at developers starting out with AEM, but I’m hoping more seasoned users can benefit from it too.
I’m back at re:Invent three years after the inaugural conference, and I’m keen to know what has changed – not so much in the platform itself, but in what people are doing with it and in the broader technology trends around the cloud.
As we turn up to the partner keynote (the day before the main conference starts), the first thing that is apparent is the sheer scale of the event. Last time around, the partner keynote was in a smallish room with a few hundred people. This year there are 6,000 partners in the room – as many people as attended the whole conference in 2012.
One of the projects that I’m currently working on is developing a solution whereby millions of rows per hour are streamed real-time into Google BigQuery. This data is then available for immediate analysis by the business. The business likes this. It’s an extremely interesting, yet challenging project. And we are always looking for ways of improving our streaming infrastructure.
As I explained in a previous blog post, the data/rows that we stream to BigQuery are ad impressions, which are generated by an ad server (Google DFP). Getting that working was a great accomplishment in its own right, especially after optimising our architecture and adding Redis into the mix. Using Redis added robustness and stability to our infrastructure. But – there is always a but – we still need to denormalise the data before analysing it.
In this blog post I’ll talk about how you can use Google Cloud Pub/Sub to denormalise your data in real-time before performing analysis on it.
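To make the idea of denormalisation concrete, here’s a minimal sketch in plain Python. The field names and lookup table are hypothetical – in our real pipeline the reference data lives in Redis and the rows flow through Pub/Sub on their way to BigQuery – but the join itself looks the same:

```python
# Raw ad-impression rows arrive carrying only an ID for the ad unit.
# Denormalising means joining each row against reference data up front,
# so the business can query the result directly, without extra lookups.

# Hypothetical reference data -- in production this would sit in Redis.
AD_UNITS = {
    101: {"advertiser": "Acme", "campaign": "spring-sale"},
    102: {"advertiser": "Globex", "campaign": "brand-awareness"},
}

def denormalise(row):
    """Merge an impression row with its ad-unit attributes."""
    enriched = dict(row)
    enriched.update(AD_UNITS.get(row["ad_unit_id"], {}))
    return enriched

impression = {"ad_unit_id": 101, "timestamp": "2015-10-01T12:00:00Z"}
print(denormalise(impression)["advertiser"])  # -> Acme
```

In the real system, the subscriber pulling messages off the Pub/Sub topic would do this enrichment before the streaming insert into BigQuery, so the table is already flat by the time analysts hit it.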
You may have already read our previous post here about Shine’s Pablo Caif & Graham Polley being nominated to become ‘Google Developer Experts’ (GDE).
Well, today Shine are proud to announce that both guys have been officially awarded the braggable title of GDE, and we’d like to congratulate them on this mighty big achievement!
Becoming an expert meant being nominated by a Google employee, undergoing a stringent evaluation and interview process, and being authorised by the Google Developers team – all based on the special contribution they make to their field.
Their acceptance to the program makes them only the second and third GDEs in Australia. AWESOME! You can check out Pablo’s official profile here, and Graham’s here. Once again, congrats to you both!
For over a decade, I have been working with developers, business stakeholders, and users to create digital experiences and/or services that are designed to inform, inspire, and entertain. While creating these experiences I have noticed a certain lack of understanding between software developers and UX designers. In this post I’ll talk about strategies I’ve used to bridge this gap.
My work commute
My commute to and from work on the train is on average 17 minutes. It’s the usual uneventful affair, where the majority of people pass the time by surfing their mobile devices, catching a few Zs, or by reading a book. I’m one of those people who like to check in with family & friends on my phone, and see what they have been up to back home in Europe, while I’ve been snug as a bug in my bed.
Stay with me here folks.
But aside from getting up to speed with the latest events from back home, I also like to catch up on the latest tech news, and in particular what’s been happening in the rapidly evolving cloud area. And this week, one news item in my AppyGeek feed immediately jumped off the screen at me. Google have launched yet another game-changing product into their cloud platform big data suite.
It’s called Cloud Dataproc.