
Open Source Hardware Educational Robotics


Atto Educacional toolkit

“The greater importance of free and open source hardware is that students can master the technology and gain autonomy: that way, they can effectively innovate, creating new products from open source solutions.” The phrase comes from professor Douglas Sulis da Costa, a free and open source software enthusiast and one of the founders of Atto Educacional, a company focused on teaching robotics to children and adolescents in high school. Douglas attended the 16th International Free Software Forum (FISL 16), recently held in Porto Alegre, Brazil, where he took part in a roundtable on the theme “Open Source Hardware Educational Robotics”.

Besides him, the meeting brought together Claudio Olmedo, a well-known advocate of free hardware in Brazil and co-founder of Centro Maker, the first company specialized in free hardware for Makers; Enoque Alves, Master in Computer Science and a member of the “Jabuti Edu” project; and Germano Postal, Master in Mechanical Engineering, who moderated the debate. The main subject was, basically, “how can Brazilian industry promote and make money with free and open source hardware development?”, a question raised by many and always controversial given the “proprietary nature” of computer systems developed in Brazil.

The good news for the free software and hardware community and its enthusiasts is that this scenario has been changing in recent years, albeit at a slow pace. Professor Douglas, for example, spent years applying Piagetian dynamics when teaching robotics in the classroom, encouraging children and adolescents to think about a problem and collaboratively reach a solution using basic concepts of engineering, electronics and computer programming; even so, only in 2006 did he obtain the financial support to create Atto Educacional.


“Jabuti Edu” Project

Alongside Atto Educacional's proposal, the “Jabuti Edu” project introduces programming principles through play with an educational robot built on a 3D printer. Developed collaboratively by the community, the robot works similarly to a radio-controlled car, and it is possible to program short sequences of movements and observe the results using only a browser on a mobile device. The central idea of the project is to turn abstract knowledge into concrete knowledge using the Logo programming language, which is aimed at children.

Jabuti Edu can be used by children as young as 4, starting in basic education; electronics and robotics are then introduced in pre-adolescence, between 10 and 12, followed by a deeper exploration of free and open source hardware and software at 15 and 16. A curiosity: the name “Jabuti”, a tortoise species typical of Brazil, is an allusion to the “turtle” of the Logo programming environment.


For Makers by Makers

Imagine a company dedicated to helping people bring renewed vigor to robotics and to innovation in the free and open source hardware ecosystem (a movement known as “Makers” in technology communities). That is precisely the challenge of Centro Maker, which aims to popularize free and open source hardware, and therefore free and open source software too, helping “Makers” become true social entrepreneurs in their professional fields.

Claudio Olmedo, one of the co-founders of Centro Maker, launched a project called “The Chestnut” at Campus Party Brazil 2015 to foster free and open source hardware development. The intention is for “The Chestnut” to cost only 1 dollar, so it can be accessible to anyone, anywhere in the world, with a focus on developing countries. The project is underway and should be presented during the 1st Latin American Free Hardware Forum in October, which is part of the 12th Latin American Free Software Conference (Latinoware).





How to Build Up a Free Culture?

Children in Porto Alegre city (south of Brazil) learning how to use free open source software.

Educommunication, open source robotics, collaborative peer production and the use of open source technology based on the Linux operating system: these are the premises that guide professors Cristina Santos, Daniela Bortolon and Jacqueline Aguiar in their daily work to promote free culture in the education of children and teenagers in Porto Alegre, Brazil.

But first of all, what does “free culture” stand for? And how does it relate to the educational context? In a two-hour session at the International Free Software Forum (FISL 16), held in Porto Alegre from July 8th to 11th, the educators highlighted the main differences between the subject's past and present. “How do we form a critical person, capable of transforming the world and the culture we live in today?”, asked Cristina Santos in her talk.

That is the importance of the free culture concept for developing people who think and take an active role in the way they live. The biggest challenge in implementing this way of thinking about education, they say, lay first in faculty resistance, since teachers are naturally averse to significant changes in their school environments, and then in the fascinating world of discovering free technologies and the benefits they can bring to both students and instructors.

The starting point was the installation of the Linux Educacional 4.0 distribution on school computers, offering a series of educational programs and a friendly interface that makes it easy to use and approach, inviting educators and students to get involved in new possibilities and ways to learn. The next step was the adoption of measures to deconstruct the negative memory associated with using Linux as a free operating system. One of them was the creation of the “Linux Sharing” group (“Linux Compartilhando”, in Brazilian Portuguese), formed in 2013; today it has over 100 active members. Through the group, it was possible to whet the curiosity of high school students to seek and share knowledge with classmates and with other classes at the schools they attend.

Another initiative was the creation of an e-learning course and a virtual forum based on the Edmodo environment. The introduction of open source programs for extracurricular work was also very important, such as the OpenShot video editor, the Audacity audio editor and the LibreOffice suite, which has a strong Brazilian community, among many others. In 2015, the teachers' pioneering work continues to expand the network of knowledge to other cities in the state's countryside.





Your Website as Fast as Google Using ElasticSearch

Many websites are a reference in their field because of the relevance of their content. But it is not enough to offer quality information, products or services; you also need to make it easy for users to find what they are looking for. Why? Because when users do not find what they need, the easiest thing for them to do is give up and leave the website.

To avoid this, what should you do? One solution is to use MySQL's Full Text Search mechanism, which will apparently solve the problem for a while. But let's say the CEO of your online store decides to launch a big Black Friday promotion (for example) and, to boost traffic and sell even more, runs a huge campaign on Google AdWords and Facebook Ads. Suddenly, the stream of people desperately searching for deals on the website increases tenfold. And that is where things get tricky if you decided to rely on Full Text Search.
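For context, a sketch of what such a full-text search looks like in MySQL (the table and column names are hypothetical):

-- a FULLTEXT index is required on the searched column (MyISAM, or InnoDB as of MySQL 5.6)
ALTER TABLE produtos ADD FULLTEXT INDEX ft_nome (nome);
SELECT * FROM produtos
WHERE MATCH(nome) AGAINST('tarmac' IN NATURAL LANGUAGE MODE);

This works well at modest scale, but every search hits the database directly, which is exactly what becomes the bottleneck during a traffic spike.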

Then comes the question: were you actually prepared to face this situation? To avoid unpleasant surprises, Breno Oliveira, a web developer who is a Sun Certified Java Programmer and Certified Scrum Master, shows how ElasticSearch can help you with this.



ElasticSearch is an open source search engine built on top of Apache Lucene that provides powerful full-text search. It offers a friendly RESTful API, near real-time data, high availability (HA) and document orientation, among other features. Companies like GitHub, Twitter, Google, eBay, Foursquare, Bloomberg, The Guardian and Yelp already use ElasticSearch in production to search and aggregate data in near real time.


Installing ElasticSearch

ElasticSearch installation is actually quite simple. Just download the latest version, unzip it and run bin/elasticsearch (or bin/elasticsearch.bat on Windows). To check that it is running, make a $ curl -XGET http://localhost:9200/ request.
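Putting the steps together (the archive name depends on the version you download):

$ tar -xzf elasticsearch-*.tar.gz
$ cd elasticsearch-*
$ ./bin/elasticsearch                  # starts a node listening on port 9200
$ curl -XGET http://localhost:9200/    # should return a small JSON with version info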


Some concepts before getting started

To make some concepts and terms easier to understand, let's compare ElasticSearch terminology with that of a relational database (MySQL):
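A mapping commonly used to explain ElasticSearch in relational terms (a rough analogy, not an exact equivalence):

| ElasticSearch | Relational database (MySQL) |
| Index         | Database                    |
| Type          | Table                       |
| Document      | Row                         |
| Field         | Column                      |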

Another important concept is how ElasticSearch stores your documents. Anyone who has used a NoSQL database before, like MongoDB, will have no trouble understanding it: documents are stored as JSON, a structure supported by most programming languages.


Creating the first records

To follow the next steps, you can use a REST client of your choice. In this example, we will use curl for simplicity.
Request template to insert data into ElasticSearch:

$ curl -X PUT http://localhost:9200/produtos/bicicletas/1 -d '{
  "modelo": "speed",
  "nome": "Specialized Tarmac",
  "marchas": 14,
  "cor": "azul",
  "tags": [
    "14 marchas"
  ],
  "valor": 1500000
}'

What we did was this: we used “produtos” as the Index and “bicicletas” as the Type, and added one bicycle whose id is 1. If you do not provide an id for the document, ElasticSearch will generate one for you.

You will have the following response from ElasticSearch:

"_index": "produtos",
"_type": "bicicletas",
"_id": "1",
"_version": 1,
"created": true,
"_source": {
"modelo": "speed",
"nome": "Specialized Tarmac",
"marchas": 14,
"cor": "azul",
"tags": [
"14 marchas"
"valor": 1500000

If you want to retrieve the record above, simply run the following request:


$ curl -XGET http://localhost:9200/produtos/bicicletas/1


There are two ways to run queries in ElasticSearch. The simplest is to use query strings in the request URL, a format usually reserved for quick and simple searches. The other way, used for more complex queries, is to send a JSON body written in the Query DSL. In general, this option is used for more refined results, aggregations and other ElasticSearch features.

Now, suppose you want to search by “bicicletas azuis” (blue bikes in Portuguese) using Query String:


$ curl -XGET http://localhost:9200/produtos/bicicletas/_search?q=cor:azul


In the query above, we send the field cor with the desired color, in this case “azul” (blue). The query will return all bicycles that are blue. ElasticSearch responds with the following result:

"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
"hits": {
"total": 1,
"max_score": 0.30685282,
"hits": [
"_index": "produtos",
"_type": "bicicletas",
"_id": "1",
"_score": 0.30685282,
"_source": {
"modelo": "speed",
"nome": "Specialized Tarmac",
"marchas": 14,
"cor": "azul",
"tags": [
"14 marchas"
"valor": 1500000


To perform the same search using Query DSL:

$ curl -XGET http://localhost:9200/produtos/bicicletas/_search -d '{
  "query": {
    "match": {
      "cor": "azul"
    }
  }
}'


This was only an introduction to ElasticSearch, which also includes features such as geolocation and analytics. Another interesting point is that ElasticSearch has client libraries for many programming languages. It is worth visiting the website and checking it out!

Source: iMasters


IT Trends and Opportunities in Brazilian Aerospace Sector

Jorge Vicente Lopes da Silva talks about the revolutionary role of 3D printing technology to solve complex problems

Demands, trends and IT opportunities in the aerospace industry supply chain were the issues discussed at an event held on June 2 at the Technological Park of São José dos Campos, São Paulo State, Brazil. Organized by Softex (Association for Promoting Brazilian Software Excellence), the event “Different markets generate great opportunities” was supported by the Technology Park, CECOMPI and Cluster TIC Vale, which brings together more than 60 companies established in the region, many of them from the aerospace sector. Among the speakers, Anderson Borille, from ITA, spoke about trends in manufacturing, especially those involving Germany's Industry 4.0 initiative and the opportunities it currently offers.

Then Jorge Vicente Lopes da Silva, Chief of the Three-dimensional Technologies Division at CTI Renato Archer, spoke about the fundamental role of additive manufacturing in metal (3D printing) for the aerospace sector, and about the revolutionary role of 3D printing in solving logistical problems as well, with parts manufactured on the spot instead of being transported from one location to another, and in creating devices of complex geometry. Jorge also presented InVesalius, an open source software for the reconstruction of computed tomography and magnetic resonance images, available for Microsoft Windows, GNU/Linux and Apple Mac OS X platforms.

The event also marked a poignant moment for Softex: its mentor, Eduardo Garcia, had died a week earlier. “The event ‘Different markets generate great opportunities’ was his idea, and he was always very interested in connecting those who demand with those who supply. In his view, there are various segments with many unmet needs, and IT vendors able to meet them; therefore, we can play an important role connecting both sides”, said Virginia Duarte. His legacy lives on in the organization.




11 Tips and Recommendations When Using APIs

Mauro Pichiliani develops different sorts of software and frequently attends hackathons and tech events in Brazil. During these activities, he usually needs to learn a new API to perform some task; in most cases, however, he ends up lacking the information he needs because the API documentation falls short. Mauro notes that “it is important to remember that good documentation alone does not make an API successful; other factors must be considered as well.” In this article, Mauro Pichiliani presents 11 tips and recommendations for those who want to get the best out of their APIs:

1. Samples all over the place




Many developers prefer to go straight to the code samples and spend little or no time studying and learning concepts. This is one of the reasons why it is important to always provide several samples of API usage, which means going beyond the basic “Hello, World”. A good tip is to provide at least one sample for each important API feature. It is also worth providing samples in each of the major languages and environments in which the API will be used, and highlighting how it behaves in certain situations (bad or no Internet connection, usage limit exceeded, and so on). Finally, remember that the more didactic, objective and realistic the samples are, and the closer they are to use cases popular in the market, the more useful they will be.


2. Have a test environment



A developer who is starting to use an API will run several tests before putting it into official (production) use. Because of this behavior, it makes sense for an API to offer a test or evaluation environment, where it is possible to “play” and find out what you can do with it. In some cases, especially for APIs that involve monetary values, it is worth investing in trial versions with some kind of limit on time or features.

Some APIs require several steps before they can be used, such as registration, authorization and token validation. These steps may discourage some developers from performing a quick test. For these cases, it is recommended to create a “playground”, a ready-made environment where the developer can get a taste of how the API works without having to go through various configuration steps or obtain authorization and authentication credentials.

3. Documentation beyond the basics




Documentation is a required feature for anyone who produces an API. There are several types of documentation, but what I search for when I am learning is something that goes beyond a simple “JavaDoc” or something generated automatically. I look in particular for diagrams, architecture overviews and some kind of visual information that helps me understand the prerequisites, the workflow, the features and the details I need to get the job done using the API. “Getting Started” guides are also great starting points for those who access the API website for the first time and have no idea where or how to start using it.

Among the important items of good documentation is making clear where the API can and should be used and where it makes sense to use it. This matters because the developer needs to know how far it is possible to go with the API and which scenarios and use cases are NOT recommended. It is also very useful to publish an updated list of known bugs for each version of the API, for there is nothing more frustrating than wasting time chasing a bug that has already been fixed in a newer version.

4. A good FAQ and RTFM




I love FAQs (Frequently Asked Questions) when I am learning how to use an API. This format of quick questions and answers helps me a lot to understand the purpose and use of the API features. Not to mention that this way it is easy to find out what other people are trying to do with the API.

However, it is NOT worth including old or outdated information about obsolete technologies that no longer makes sense. It is also important to adopt a lighter, humorous and didactic tone when writing the FAQ, compared with the official documentation, as friendlier content encourages developers to work with the API. On the other hand, avoid sarcasm, treating the developer as “stupid”, or the dismissive “read the manual” implied by “RTFM”. In short, try to be gentle.


5. Build up a communication channel



A successful API should have some sort of communication channel with the developer. This channel may be email, Twitter, a contact form or anything else that allows a developer with a technical question to be heard. I emphasize that indicating the average response time for asynchronous channels (such as email) helps a lot to give the feeling that the “client” is important and will be heard.

In general, communication channels for developers should be simple, short and quick, because whoever writes software and goes so far as to contact the provider has probably already run several tests and hit a barrier that brought them to a standstill and made them call for help. Therefore, communication should be direct and to the point so that development is not delayed any further.

6. Make prices, types of access and usage limits clear




Many APIs rely on a business model that charges for use, according to some list of features, time or number of calls. It is extremely important to make clear what the prices are, the usage tiers (free, standard or advanced), the payment and billing options, and how refunds work. It is common for APIs to start out free and then become paid once they reach a certain number of users. At first, such a change in business model can frustrate and scare some developers. In this case, what they want is clear communication about how things will work once the service becomes paid, including time frames, limitations, and the advantages and disadvantages of how the API service was and how it will be under the new business model.

7. Keep a decent history

Every software project that changes with each new version has a history behind it. That history is important not only to show the project's steps, but also to make clear which direction it is taking. Several APIs change constantly along with their environment, for instance: new browser versions, features to make them faster, changes to support new platforms, and general adaptations to new technologies and web development trends. Therefore, presenting that history properly is important to provide context and to make clear how the API came to be what it is today.


8. Have a troubleshooting guide




It is very common to run into problems and difficulties when studying a new API or when certain conditions of the environment change. At such times, it is important to have a troubleshooting guide that indicates, step by step, which points should be checked.

In the case of APIs, a troubleshooting guide should explain, step by step, how to perform a connection test, which parameters are required, what kind of response is returned for each request and what to do if any of the steps fail. This type of guide is very useful when debugging a problem and it is not yet known which component of the solution is failing. In that situation, a diagnostic guide with step-by-step instructions is extremely useful, helping you first understand what is going on and then take some action to remedy the problem.
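As an illustration, the kind of connection test such a guide might start with (the endpoint and header below are hypothetical):

$ curl -i -H "Authorization: Bearer $API_TOKEN" https://api.example.com/v1/ping
# check the HTTP status: 200 means connectivity and credentials are fine,
# 401/403 points to authentication, and 5xx to a problem on the provider's side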

9. Roadmap and new features




One factor developers take into account when studying an API and pondering whether it is worth using is its continuity. In other words: is it worth studying, investing time and changing an application to use the API if it will not serve me in the future? No one wants to use a service that looks like an abandoned Old West town, so a good way to make clear where the API project is going is to provide a roadmap showing the latest API versions and the plans for the next ones. By doing so, the API provider shows its commitment to maintaining the service and makes it clear that the project is serious, not just something temporary with an uncertain future.

10. Highlight main projects using the API




An API is typically part of a larger piece of software, even if that software is just a graphical interface on top of it. If the API was well designed and is genuinely useful and flexible, it is natural to expect different kinds of projects to make use of it. Highlighting and showcasing those projects is a great way to encourage other people to use it too and to show the value of the service. When I evaluate an API and see several projects using it, I feel more confident: if others are already relying on the service, that motivates me to use it as well.

11. Foster the community




No one wants to be the guest who arrives at the party first and realizes later that they were the only one who showed up. That is why fostering a community is a very important step: so that developers using the API do not feel isolated and lonely. There are several ways to engage the community, including events like hackathons, coding dojos, meetings where people get together to translate material into other languages or produce documentation, and even social gatherings where community members go out for drinks and talk about the project. Regardless of the format, it is worth investing in the user community so that you can improve the service, hear how it is used and stay in touch with the people who rely on it.

Source: iMasters



MySQL Replication in 5 Minutes

Configuring MySQL replication is extremely simple. This article demonstrates how to create a master and, within minutes, replicate it to a slave. Replication is a native MySQL feature and has multiple uses, such as backup, high availability, redundancy, geographical data distribution and horizontal scalability, among others.

For this simple test, we use Linux and configure replication in MySQL 5.6 between two instances: a master and a slave. We will create the two MySQL instances from scratch, that is, without data. They will be on the same machine, but listening on different ports: 3310 and 3311.

The only requirement is to have MySQL 5.6 installed.

» If you already have it installed, simply use the path where bin/mysqld resides as basedir in the steps below. For example, on Oracle Linux 7 or RHEL 7 the binary is located at /usr/sbin/mysqld, therefore basedir=/usr;

» If you do not have MySQL 5.6 binaries, just download the tar file and unpack it in a convenient directory that will be your basedir, such as /opt/mysql/mysql-5.6:

# mkdir /opt/mysql
# cd /opt/mysql
# wget
# tar xvzf mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
# rm mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
# mv mysql-5.6.23-linux-glibc2.5-x86_64 mysql-5.6

Note: In this case, for the steps below consider basedir=/opt/mysql/mysql-5.6; always try to work with the latest version, replacing 5.6.23 in the commands above if a newer one is available for download.


Simple replication

Create an instance for the master:

# mkdir /opt/mysql/master /opt/mysql/master/data /opt/mysql/master/tmp
# cd /opt/mysql/master
# nano master.cnf
# chown mysql:mysql *
# /usr/bin/mysql_install_db --defaults-file=/opt/mysql/master/master.cnf --user=mysql
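A minimal master.cnf sketch consistent with the ports and paths used in this test (server-id and log-bin are what replication requires):

[mysqld]
basedir   = /opt/mysql/mysql-5.6
datadir   = /opt/mysql/master/data
tmpdir    = /opt/mysql/master/tmp
socket    = /opt/mysql/master/mysql.sock
port      = 3310
server-id = 1
log-bin   = master-bin

slave.cnf follows the same pattern, pointing to the slave paths, with port = 3311 and server-id = 2.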

Start and test the new instance:

# /usr/bin/mysqld_safe --defaults-file=/opt/mysql/master/master.cnf &
# mysql --defaults-file=/opt/mysql/master/master.cnf -uroot -p
master> SHOW VARIABLES LIKE 'port';
| Variable_name | Value |
| port | 3310 |

Note: To stop the MySQL process when necessary, make a clean shutdown:

# mysqladmin --defaults-file=/opt/mysql/master/master.cnf -uroot -p shutdown

Open another terminal and create another instance to be the Slave:

# mkdir /opt/mysql/slave /opt/mysql/slave/data /opt/mysql/slave/tmp
# cd /opt/mysql/slave
# nano slave.cnf
# chown mysql:mysql *
# /usr/bin/mysql_install_db --defaults-file=/opt/mysql/slave/slave.cnf --user=mysql
Start and test the new instance:
# /usr/bin/mysqld_safe --defaults-file=/opt/mysql/slave/slave.cnf &
# mysql --defaults-file=/opt/mysql/slave/slave.cnf -uroot -p
slave> SHOW VARIABLES LIKE 'port';
| Variable_name | Value |
| port | 3311 |

Now that we have two instances with different server-ids and log-bin enabled, create a user in the master instance so the slave can connect to it:


master> CREATE USER repl_user@;
master> GRANT REPLICATION SLAVE ON *.* TO repl_user@ IDENTIFIED BY 'repl_user_password';


Note: In an actual installation, the slave instance will probably be on another host; use that host's IP (or localhost, for this local test) as the host part of the replication user above.

Before starting replication, check the master status with SHOW MASTER STATUS\G:


*************************** 1. row ***************************
File: master-bin.000003
Position: 433

Use the status data above to initiate replication on the slave.
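A minimal sketch of this step, assuming the master is reachable at 127.0.0.1:3310 and using the repl_user account created earlier together with the file and position shown by the master status:

slave> CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3310,
       MASTER_USER='repl_user', MASTER_PASSWORD='repl_user_password',
       MASTER_LOG_FILE='master-bin.000003', MASTER_LOG_POS=433;
slave> START SLAVE;
slave> SHOW SLAVE STATUS\G

Check that Slave_IO_Running and Slave_SQL_Running both show Yes before moving on.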

Basic test:

master> CREATE DATABASE teste_repl;
master> CREATE TABLE teste_repl.simples (id INT NOT NULL PRIMARY KEY);
master> INSERT INTO teste_repl.simples VALUES (999),(1),(20),(5);
slave> SELECT * FROM teste_repl.simples;




7 Tips to Optimize WordPress Loading Speed by Up to 300%

Loading speed is an important aspect of a website, both for SEO and for user experience. For users, it is frustrating to wait a long time for a website to load; usually, when it takes more than three seconds, the user simply leaves. Simple as that. For Google, there is no point in delivering a website that will probably be abandoned by users. All the web design and content creation work will be in vain if users do not have the patience to see it.

To solve this problem, Eric Platas, web designer and SEO expert, highlights seven simple tips to optimize websites in WordPress, although most of them apply to any type of website. Try them and you will be able to increase loading speed by up to 300%! Platas chose WordPress because, besides being the world's most popular content management platform, it is the one with the greatest support from the developer community, which is constantly producing new plugins, tutorials and updates. Depending on the platform you are optimizing, you will need to find an equivalent plugin or solution, but the concept behind each tip remains the same.


Google PageSpeed Insights

Let's start with Google PageSpeed Insights, because it is the first step in optimizing a site and it will allow you to measure the importance of the next steps. Google PageSpeed Insights is a diagnostic tool that evaluates the performance of your website and suggests ways to improve its loading speed. It does NOT actually measure loading time; instead, it checks the aspects that matter for performance and suggests improvements, such as enabling caching, reducing image sizes or placing JavaScript at the end of the code. With PageSpeed, you will learn good development practices that will help you write cleaner, better-packaged code, which will reflect not only on the website's performance but will also make maintenance easier and have a positive impact on SEO strategies.


Pingdom Tools

Pingdom Tools, in turn, is a diagnostic tool that approaches the problem from the other side of PageSpeed Insights. Pingdom Tools analyzes the HTTP requests made by your site: images, scripts, style sheets and external resources (social widgets, videos, iframes, Ajax and so on). The coolest thing is that Pingdom generates a report of all your website's files, showing when each file was requested, how long the server took to respond, the loading time and when the request finished. That way, you can identify performance bottlenecks you would not otherwise notice, such as heavy files, a slow server, external scripts (such as Facebook's) and broken links. Another cool feature in Pingdom is that it shows how long your website takes to load from different parts of the world, because depending on how far away your website is hosted, it may load more slowly.


Plugin WP Fastest Cache

When it comes to cache plugins for WordPress, W3 Total Cache is the biggest reference, being recommended by hosting companies like GoDaddy, HostGator and Rackspace. But after facing JavaScript compatibility issues on some websites, Platas decided to test other plugins. He then found a modest plugin that had been reviewed only a few hundred times, but always with 5 stars. Intrigued, he read the description and found that the concept behind WP Fastest Cache is simple but very efficient: it saves static HTML copies of pages, eliminating the need for database queries and heavy processing on the server. It also has other features, such as gzip compression, browser caching and HTML, JS and CSS minification.

Optimize your images

Images account for more than half of a typical website's traffic, so they are one of the best opportunities to optimize loading speed. Platas recommends a Yahoo! service that reduces the size of images without losing quality. Best of all, there is also a WordPress plugin that optimizes images as you upload them and can also optimize all the images that have already been uploaded.


Install the essentials only

WordPress has plugins for just about every need a website may have, and that is good. But some plugins are real performance villains, and even the lighter ones add some extra processing. For this reason, it is important to pay attention to the installed plugins. A useful tip: disable the plugins that are not needed at the moment and uninstall those that have not been used for a long time. The biggest performance villains are plugins that access external servers, such as the Disqus comment system or social sharing bars; they need many scripts and style sheets to run, which makes the website slow and cumbersome, especially on 3G connections. Use Pingdom Tools to identify the plugins that are delaying your site's loading; if you manage the installation from the command line, the sketch below shows one way to do this housekeeping.
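A minimal sketch using WP-CLI, assuming it is available on the server (the plugin slug is just an example):

$ wp plugin list --status=active              # see what is currently running
$ wp plugin deactivate disqus-comment-system  # switch a suspect plugin off
$ wp plugin delete disqus-comment-system      # remove it completely once you are sure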


Be minimalist

Steve Jobs used to say something I keep as a lesson: “I'm as proud of what we don't do as I am of what we do.” That is why the iPod, iPhone and iPad have a single button: they did not need more than one. Think about that when creating a website. Ask yourself: do I really need the image gallery? Does this layout work only with standard fonts? You will notice that your design decisions stand out more when you do less, and your websites will be much lighter.



Use a CDN

As previously mentioned, the farther away your site is hosted, the slower it loads. To solve this problem, there are CDN (Content Delivery Network) services, which distribute content across servers around the world. When users access your site, they connect to the closest server, making access much faster. Some CDNs go even further, reducing file sizes and generating cache; Google PageSpeed Service and CloudFlare do this, and both are free.

Source: iMasters



Installing MongoDB on AWS


Rafael Novello, systems analyst, shows how to install MongoDB on AWS. In his case, MongoDB is installed on EC2, Amazon's virtual server service. In this environment it is possible to provision disk speed (on volumes called EBS) by configuring IOPS, and for MongoDB, the faster the disk, the better! An important detail, however, is that it is not enough to configure the disk with a lot of IOPS; you also need to pre-warm it!

The pre-warming operation is only needed once, at the start, whether it is a new disk or a disk created from an image. Without this preparation, the disk may be 5% to 50% slower (yes, up to 50% slower!), and this was certainly one of the factors that made me suffer with query performance. You can read more about pre-warming in the Amazon documentation, but it consists of unmounting the disk and running the following command:

sudo dd if=/dev/zero of=/dev/xvdf bs=1M

Replace xvdf with the device you are using and wait; the command can take a few hours to complete, depending on the disk size. A good idea is to use Linux screen so you do not lose the work if your session drops.
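A minimal screen workflow for that (the session name is arbitrary):

$ screen -S prewarm                         # start a named screen session
$ sudo dd if=/dev/zero of=/dev/xvdf bs=1M   # run the pre-warm inside it
# detach with Ctrl-A d; the command keeps running on the server
$ screen -r prewarm                         # reattach later to check progress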

Another very important point for MongoDB performance, regardless of the host used, is the system's ulimit settings. Most Linux systems are configured by default to prevent a user or process from consuming too many server resources, but sometimes these limits are too low and interfere with MongoDB performance. You can read more about this in the 10gen documentation, but the recommendation is as follows (a shell sketch of how to apply these values appears after the list):

file size: unlimited
cpu time: unlimited
virtual memory: unlimited
open files: 64000
memory size: unlimited
processes/threads: 64000
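One way to apply these values is in the shell that launches mongod, using standard bash ulimit flags (a sketch):

$ ulimit -f unlimited   # file size
$ ulimit -t unlimited   # cpu time
$ ulimit -v unlimited   # virtual memory
$ ulimit -n 64000       # open files
$ ulimit -m unlimited   # memory size
$ ulimit -u 64000       # processes/threads

To make the limits permanent for the user that runs mongod, equivalent entries can be added to /etc/security/limits.conf.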

With these two settings I was able to dramatically improve the performance of my MongoDB installation, so I think these tips can help everyone.


Hardware allocation

Another point I realized was very important for MongoDB performance is choosing the right server. When I started working with this database, I believed the disk would be the most important resource and, although that is not totally wrong, I now see that RAM is what matters most. During database operation, if there is not enough RAM, memory contents have to be swapped in and out frequently, increasing disk usage and hurting performance. The recommendation is that at least the indexes should fit in memory; you can check that with the stats command on each collection in the MongoDB console:

> db.sua_colecao.stats()

The command shows the size of each index in bytes; just add them up and see whether they fit in RAM (the snippet below shows one way to do that). For those using AWS, a new memory-optimized EC2 instance type has been released, the R3 family. These instances are a great option for MongoDB, as shown in a mongodirector article.
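A small sketch in the mongo shell, reusing the collection name from the example above, that sums the per-index sizes reported by stats():

> var s = db.sua_colecao.stats()
> Object.keys(s.indexSizes).reduce(function(total, name) { return total + s.indexSizes[name]; }, 0)

The result is the total size of the collection's indexes in bytes, which you can compare with the instance's available RAM.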


MongoDB version

There are several reasons why it is always recommended to use the newest version of any system, but in MongoDB's case there are points that matter for this discussion. For instance, version 2.4 held a global lock on the database, which means that any write operation would block the database completely and no other write could be made. In version 2.6 the locks moved to the database level, so it became possible to write and read simultaneously in different databases. In version 2.8, according to a MongoDB blog post, locks will be at the document level, which will have a huge impact on the system's performance.

Source: iMasters


Drawing Particles Using HTML5 Canvas

Canvas is one of the most fun features of HTML5. “The amount of cool things that can be created is absurdly huge. Yet many people find it hard to learn, and the truth is that it is not”, says Raphael Amorim, developer and open source enthusiast. “Of course, having a good geometry background is very important. But even if you do not know much, you can create very simple things and go further from there.” He shows an example below:

In your HTML file, create a simple structure and add a canvas tag with a class of your choice. For this article, the class name will be “particles”. Before closing the body tag, load the JavaScript file, named “particles.js”.

<canvas class="particles"></canvas>
<script src="particles.js"></script>


Then, in particles.js, let's start the canvas magic! I will explain the code in parts for better understanding; the full code is available on GitHub. First, add a function bound to the window onload event that selects the body and the canvas and applies some styles to these elements. Note that there is no CSS file: I chose to set the styles within JavaScript, but you can do it however you prefer. We also schedule the update function, which will run at a fixed interval.

window.onload = function() {
  var body = document.querySelector('body');
  body.style.background = '#2C2C44';
  canvas = document.querySelector('.particles'); // the canvas tag is selected by its class
  ctx = canvas.getContext('2d');
  // assumption: the three '0px' assignments in the original reset default spacing
  body.style.margin = '0px';
  body.style.padding = '0px';
  canvas.style.margin = '0px';
  canvas.width = canvas_width;
  canvas.height = canvas_height;
  draw = setInterval(update, speed);
};


After that, we define some variables, such as the interval between frames and the canvas size. Keep in mind that using global variables is not good practice; their use here is justified only for teaching purposes. This project does not use requestAnimationFrame, but I recommend taking a good look at it: with it, the browser can optimize concurrent animations into a single reflow and repaint cycle, leading to higher animation fidelity. It also works very well for animations synchronized with CSS transitions or SVG SMIL.

In addition, when a JavaScript-based animation loop runs in a tab that is not visible, the browser will not keep it running, which means less CPU, GPU and memory usage and much longer battery life. There are good study resources on requestAnimationFrame available online.
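For reference, a minimal requestAnimationFrame loop that could replace the setInterval calls used in this experiment (a sketch, not part of the original code):

function loop() {
  update();                           // draw the next batch of particles
  window.requestAnimationFrame(loop); // the browser schedules the next frame
}
window.requestAnimationFrame(loop);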

// Settings
var speed = 35,
canvas_width = window.innerWidth,
canvas_height = window.innerHeight;


Next comes the definition of other global variables: the canvas reference, the number of particles created so far, a limit on the number of particles, the list of particles that were created, and the colors used. Again, using global variables is not recommended; the cost is usually high and the application becomes less organized. One of the few cases where global scope is advantageous is when the data is constant.

var canvas,
    times = 0,
    limit = 100,
    particles = [],
    colors = ['#F0FD36', '#F49FF1', '#F53EAC', '#76FBFA'];

If we are creating something that needs randomness in position, size and color of a particle, why not use a function to deliver this data? This is not the best solution, but it is very practical and easy to understand.

var getRand = function(type) {
  if (type === 'size')
    return (Math.floor(Math.random() * 8) * 10);
  if (type === 'color')
    return Math.floor(Math.random() * colors.length);
  if (type === 'pos')
    return [
      (Math.floor(Math.random() * 200) * 10),
      (Math.floor(Math.random() * 80) * 10)
    ];
  return false;
};

Okay, now let's create a generic function that draws one particle based on the incoming arguments.

var drawParticle = function(x, y, size, color, opacity) {
  ctx.globalAlpha = opacity;
  ctx.beginPath();                      // assumption: each particle starts its own path
  ctx.arc(x, y, size, 0, 2 * Math.PI);
  ctx.fillStyle = color;
  ctx.strokeStyle = color;
  ctx.fill();                           // assumption: the arc is filled and stroked here
  ctx.stroke();
};


Remember the update function, the one scheduled with setInterval inside the window onload handler? That is where the “drawing magic” of the particles happens, and where the particle limit is controlled. Note that, for each particle drawn, an entry with that particle's individual data is also saved in the particles list.


function update(args) {
  var color = colors[getRand('color')],
      pos = getRand('pos'),
      size = getRand('size'),
      opacity = 1;
  drawParticle(pos[0], pos[1], size, color, opacity);
  particles.push([pos[0], pos[1], color, opacity, size]);
  times += 1;              // assumption: the counter checked below is incremented here
  if (times >= limit) {
    clearInterval(draw);   // assumption: the update loop is stopped before clean takes over
    draw = setInterval(clean, speed);
  }
}


So far, the experiment only creates particles on the screen and, when it reaches the limit, it stops.

There is a function named clean, which starts running when the particle limit is reached inside update. It goes over each particle and lowers its opacity a little on every pass, at the interval defined above, producing a fade-out visual effect.


function clean() {
  ctx.clearRect(0, 0, canvas_width, canvas_height);
  particles.forEach(function(p) {
    // p holds [x, y, color, opacity, size]
    p[3] = p[3] - 0.06;                          // fade the particle a little
    drawParticle(p[0], p[1], p[4], p[2], p[3]);
    if (p[p.length - 1] && p[3] <= 0.0) {
      clearInterval(draw);                       // assumption: the clean loop is stopped here
      ctx.clearRect(0, 0, canvas_width, canvas_height);
      times = 0;
      particles = [];
      draw = setInterval(update, speed);
    }
  });
}

Now you can run the experiment in your browser and you will see a simple canvas animation (you can also see it running online). This code could use some refactoring, and if you want you can send a Pull Request on GitHub.

Source: iMasters


The Semantic Web: The Web of Meanings and Relationships

Yasodara Cordova is a self-taught developer and an enthusiast of open web standards, and she tells the following story: in 1994, Tim Berners-Lee proposed an evolution of the Web at a conference at CERN (the European Organization for Nuclear Research) in Geneva, Switzerland. With not very pretty drawings, but in very clear language, the creator of the World Wide Web argued that the Web should stop being just an index of linked nodes and become a “reservoir of meanings”, something that could bring to the Internet the complexity and beauty of the relationship between humans and machines. That is how the concept of the Web of Data first appeared: basically a network of meanings that correlate with one another. To give meaning to things on the Web, you need to teach machines how to read those meanings.

We already know that the browser reads what is written for humans thanks to the world's most famous markup language, HTML. It is HTML that tells the browser what it is displaying and what you are putting on the Internet through it. However, HTML cannot define the meaning of resources and the relationships between them. To solve this problem, the W3C created a domain, or task force, to develop Semantic Web technologies: the W3C Semantic Web Activity. From this group emerged improvements and standards to evolve the Web of Data consistently, as well as several examples of the Semantic Web applied in practice. In 2013, the group was closed and replaced by the Data Activity, designed to build the Web of Data.

The core roles of the Semantic Web (connecting, describing and delivering data) are covered by simple technologies: URIs (identifiers) to connect resources, RDF to represent and describe data, and SPARQL to query the resulting data and extract the answers we need when building semantic applications.

The Resource Description Framework, or RDF, works by describing relations to machines and disambiguating the meaning of each node. This is how triples are formed: small “sentences” that computers can understand, built from the resources we put on the Web. These resources, connected by meaning, are gradually forming a large cloud of connected resources.


At first, efforts focused on developing the vocabulary needed to produce tags that computers understand. The Linked Data Vocabularies initiative was very important in this context because it offered an ecosystem that grows organically, providing a wide variety of vocabularies to describe resources. When grouped, these vocabularies form ontologies, sets designed to provide concise meanings for specific subjects. The existence of ontologies ensures interoperability among semantic databases, and with interoperability it becomes possible to cross data and exchange dynamic data streams with less effort and fewer resources. The evolution of these vocabularies led to a Google-led task force that gathered several search engines, including Bing, Google, Yahoo! and Yandex, to create a schema of “microdata” (not to be confused with metadata!) to be inserted into page markup, seeking to improve search results and make it easier for users to find what they want on the Web.

The Semantic Web is also called the Web of Linked Data because it rests on a myriad of resources and on developing technologies to extract value from that data. Many groups strive to bring this reality to the Web, whether with recommendations on how to use it, by opening the code of web applications, or by coming together to develop new standards that give developers simple and mature technologies. In this sense, experimental use of these technologies is very welcome. For example, JSON-LD was recently launched to help developers connect data and give it meaning. Why not try it?
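As an illustration, a minimal JSON-LD document using property names from the public schema.org vocabulary:

{
  "@context": "http://schema.org",
  "@type": "Person",
  "name": "Tim Berners-Lee",
  "jobTitle": "Inventor of the World Wide Web"
}

Any consumer that understands the schema.org context can read this as a statement about a person, not just as an arbitrary set of key-value pairs.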

W3C Brazil today leads, together with IBM and the British government, the working group on best practices for publishing data on the Web, a project affectionately known by the acronym #dwbp. To join the group's meetings you need to be affiliated with the W3C, but anyone can take part in the asynchronous discussions on the mailing list by sending a message with their thoughts on the progress of the group's discussions.



Source: iMasters

