Category: BigData

From Pandas to Apache Spark’s Dataframe

With the introduction of Window operations in Spark 1.4, you can finally port pretty much any relevant piece of Pandas' Dataframe computation to Apache Spark's parallel computation framework using Spark SQL's Dataframe. If you're not yet familiar with Spark's Dataframe, don't hesitate to check out my last article, RDDs are the new bytecode of Apache Spark, and […]
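
For a taste of what porting looks like, here is a minimal sketch of a window operation in Scala, assuming a DataFrame `df` with hypothetical "category" and "revenue" columns:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    // Rank rows within each category by revenue: the Spark SQL
    // equivalent of a Pandas groupby + rank.
    // (row_number() was named rowNumber() back in Spark 1.4.)
    val byCategory = Window.partitionBy("category").orderBy(desc("revenue"))
    val ranked = df.withColumn("rank", row_number().over(byCategory))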

RDDs are the new bytecode of Apache Spark

The Apache Spark 1.3 release introduced the Dataframe API for Spark SQL. For those of you who missed the big announcements, I'd recommend reading the article Introducing Dataframes in Spark for Large Scale Data Science from the Databricks blog. Dataframes are very popular among data scientists; personally, I've mainly been using them with […]
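
As a quick illustration, here is a minimal sketch of the 1.3-era API from the Spark shell (the `Person` case class and its values are hypothetical):

    import org.apache.spark.sql.SQLContext

    // `sc` is the SparkContext the Spark shell provides.
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    case class Person(name: String, age: Int)
    val people = sc.parallelize(Seq(Person("Alice", 29), Person("Bob", 31))).toDF()
    people.filter($"age" > 30).show()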

Changing Spark’s default Java serialization to Kryo

Apache Spark’s default serialization relies on Java, with the default readObject(…) and writeObject(…) methods for all Serializable classes. This is a perfectly fine default behavior as long as you don’t rely on it too much… Why? Because Java’s serialization framework is notoriously inefficient, consuming too much CPU and RAM and producing payloads too large to be a suitable large scale serialization […]
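
Switching to Kryo boils down to a single configuration line; a minimal sketch in Scala (the app name is a hypothetical placeholder):

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    // One setting swaps Java serialization for Kryo cluster-wide.
    val conf = new SparkConf()
      .setAppName("kryo-example") // hypothetical app name
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)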

Try Apache Spark’s shell using Docker

Ever wanted to try out Apache Spark without actually having to install anything? Well, if you’ve got Docker, I’ve got a Christmas present for you: a Docker image you can pull to try and run Spark commands in the Spark shell REPL. The image has been pushed to the Docker Hub here and can be […]
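
Usage would look roughly like this, with `<user>/spark` standing in for the actual image name given in the post:

    # pull the image from the Docker Hub (name is a placeholder)
    docker pull <user>/spark
    # start a container; depending on the image's entrypoint you may
    # need to launch bin/spark-shell yourself once inside
    docker run -it <user>/spark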

Apache Spark: Memory management and graceful degradation

Many of the concepts of Apache Spark are pretty straightforward and easy to understand; however, some lucky few can be badly misunderstood. One of the greatest misunderstandings of all is the fact that some still believe that “Spark is only relevant with datasets that can fit into memory, otherwise it will crash”. This is a misconception, […]
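
Graceful degradation is, in fact, one persistence choice away; a minimal sketch from the Spark shell, assuming a hypothetical HDFS path:

    import org.apache.spark.storage.StorageLevel

    // Partitions that don't fit in memory spill to disk instead of
    // crashing the job (`sc` is the shell's SparkContext).
    val lines = sc.textFile("hdfs:///data/huge-dataset")
    lines.persist(StorageLevel.MEMORY_AND_DISK)
    println(lines.count())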

Apache Spark: the importance of broadcast

Apache Spark is a distributed computation engine that aims to replace Hadoop and to provide higher-level APIs to simply solve problems where Hadoop shows its limitations and its complexity. This post is part of a series of posts on Apache Spark that digs deeper into certain notions of the system, from development and optimization through to deployment. A […]
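
To give an idea of the pattern, here is a minimal broadcast sketch in Scala (the lookup table and its contents are hypothetical):

    // Ship a read-only lookup table to each executor once, instead of
    // once per task (`sc` is the shell's SparkContext).
    val countryNames = Map("fr" -> "France", "de" -> "Germany")
    val broadcastNames = sc.broadcast(countryNames)

    val codes = sc.parallelize(Seq("fr", "de", "fr"))
    codes.map(code => broadcastNames.value.getOrElse(code, "unknown"))
         .collect()
         .foreach(println)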

How to test and understand custom analyzers in Lucene

I’ve begun to work more and more with the great “low-level” library Apache Lucene, created by Doug Cutting. For those of you who may not know, Lucene is the indexing and searching library used by great enterprise search servers like Apache Solr and Elasticsearch. When you start to index and search data, most of the […]
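
The simplest way to understand an analyzer is to print the tokens it actually emits; a minimal sketch in Scala, using Lucene's StandardAnalyzer as a stand-in for a custom one:

    import org.apache.lucene.analysis.standard.StandardAnalyzer
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute

    // Note: recent Lucene has a no-arg constructor; older 4.x
    // versions take a Version argument.
    val analyzer = new StandardAnalyzer()
    val stream = analyzer.tokenStream("field", "The Quick Brown Fox")
    val term = stream.addAttribute(classOf[CharTermAttribute])

    stream.reset()
    while (stream.incrementToken()) println(term.toString)
    stream.end()
    stream.close()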

Book review: ElasticSearch Server by Rafal Kuc, Marek Rogozinski

I don’t usually do a lot of book reviews, mainly because I rarely finish the books I begin… But I decided to finish this one, and I wanted to express my views on it. If you look at the reviews of ElasticSearch Server on amazon.com, you will get a first opinion that I can only […]

Elasticsearch is the way

Don’t get me wrong, I love Apache Solr; I think it’s a wonderful project, and the 4.x versions are definitely something you should check out when building a proper search engine. But Elasticsearch, at least for me, is now the way to the future. If you need a few reasons why, read on: Out of […]