Setting up PostgreSQL & pgAdmin

What’s the difference between PostgreSQL and pgAdmin?

PostgreSQL is a free, open-source relational database management system (RDBMS); pgAdmin is a graphical user interface (GUI) tool that lets users interact with their PostgreSQL database(s). In other words, PostgreSQL is the database system itself, and pgAdmin is a tool for working with that system.

How do I know if I already have PostgreSQL installed?

If you're a Windows user, verify that PostgreSQL is installed by using SQL Shell (psql). In the search bar in the bottom left-hand corner of your desktop, type SQL Shell and open it.

Hit enter four times to accept the default server, database, port, and username, then enter the password you set when you installed PostgreSQL. Finally, type in SELECT version(); and hit enter. If it displays the PostgreSQL version you're using, that means you have PostgreSQL installed:

If you're using a Mac, open the Terminal. Once in the Terminal, run the command postgres --version, which will show you the version of PostgreSQL installed on your machine, or give you an error if you haven't installed PostgreSQL.
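
If you're comfortable with Python, you can also confirm that a PostgreSQL server is up and reachable from code. This is just a sketch, not part of the original walkthrough: it assumes the psycopg2 package is installed and uses placeholder credentials you'll need to replace with your own.

import psycopg2  # assumes psycopg2 is installed (pip install psycopg2-binary)

connection = psycopg2.connect(host="localhost", port="5432",
                              user="postgres",
                              password="your-password-here",
                              dbname="postgres")
cursor = connection.cursor()
cursor.execute("SELECT version();")
print(cursor.fetchone()[0])  # e.g., 'PostgreSQL 16.x ...'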

How do I install PostgreSQL?

Visit the PostgreSQL download page and select your operating system from the options it gives you. Then, click on Download the installer:

On the page it brings you to, select the most up-to-date version of PostgreSQL available for your OS. The rest of the installation should be straightforward; in fact, the installer will even include pgAdmin alongside PostgreSQL by default:

If you want to install pgAdmin at the same time, make sure it is checked when you get to this screen:

How do I install pgAdmin?

If you didn’t choose to install pgAdmin alongside PostgreSQL, you’ll have to install pgAdmin separately. Fortunately, installing pgAdmin is also very straightforward. First, navigate to https://www.pgadmin.org/download/. Then, select your OS from the list of choices, and that’ll bring you to the download screen. Select the most recently updated version. Finally, click the file with the .exe extension:

From there, the installation is very straightforward.

How do I launch pgAdmin once I have it installed?

For Windows users, go to the search bar in the bottom left-hand corner of your desktop and click on Apps in the top ribbon of the window that appears. Then type in pgAdmin, and it should come right up. Hit enter to launch the pgAdmin application:

For Mac users, pgAdmin is accessible in your Applications folder:

PivotTables in Excel

What is a PivotTable?

A Pivot Table is an analytics tool that can quickly answer key business questions and extract insights from a vast dataset. PivotTables are one of the most efficient and effective ways to evaluate large quantities of data in Excel. By "pivoting," or aggregating, a large data table into a condensed, visually appealing format, PivotTables reveal key information that would not be visible by looking at the raw data.

Example

The Lemonade Stand is a timeless example of operating a successful business, illuminating the importance of intelligent marketing, pricing, and financial decisions. It doesn't matter if you are the CEO or just started yesterday; we're going to highlight what a Pivot Table can do for your business.

Fortunately, the lemonade data above is formatted correctly. In this case, select your preference for either Excel or Google Sheets, and follow the video to create a Pivot Table.

The 4 Areas of a Pivot Table:

You should see an empty Pivot Table. Your business question will determine which fields (Employee, Shift, Day of the Week, Sales, Weather, Temperature) you want to include. To help decide, we also need to understand the four areas of a Pivot Table; a short pandas sketch after the list shows the same ideas in code.

The 4 areas are Columns, Rows, Values, and Filters:

Columns – This will create a list of all of the unique values in the field you choose, going horizontally across the page.

Rows – This will create a list of all of the unique values in the field you choose, going vertically down the page.

Values – The field that you want to measure in your Pivot Table, aggregated based on what is in the Rows and Columns boxes. For example, sum of Sales, count of Employees, average of Temperature. Typically a numeric field, but not always.

Filters – You can filter the entire PivotTable using whatever fields you put in here. For example, a filter on Weather lets you display sales from only the sunny days.
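
For readers who prefer code, here is a minimal pandas sketch of the same four areas, using a made-up lemonade dataset with the fields above (the numbers are invented for illustration):

import pandas as pd

lemonade = pd.DataFrame({
    "Employee":        ["Paige", "Sam", "Paige", "Sam"],
    "Day of the Week": ["Friday", "Friday", "Saturday", "Saturday"],
    "Weather":         ["Sunny", "Cloudy", "Rainy", "Sunny"],
    "Temperature":     [78, 65, 60, 74],
    "Sales":           [120, 45, 30, 90],
})

# Filters area: restrict the data before pivoting
sunny = lemonade[lemonade["Weather"] == "Sunny"]

# Rows -> index, Columns -> columns, Values -> values (plus an aggregation)
pd.pivot_table(sunny,
               index="Employee",           # Rows area
               columns="Day of the Week",  # Columns area
               values="Sales",             # Values area
               aggfunc="sum")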



Examples:

  1. Let's say our lemonade venture wants to see the sum of Sales across the Day of the Week for each Employee. Select your preference for either Excel or Google Sheets, and follow the video to create this Pivot Table.

  2. Furthermore, let’s say our lemonade venture wants to filter these Sales for when the Temperature is over 70 degrees. The filter area allows us to do this. For a bonus, we will add some conditional formatting.
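
Continuing the made-up pandas frame from the sketch above, the same filter-then-pivot idea looks like this; the Styler call at the end is only a rough stand-in for spreadsheet conditional formatting:

# Filter for shifts where the Temperature is over 70 degrees, then pivot
hot = lemonade[lemonade["Temperature"] > 70]
hot_pivot = pd.pivot_table(hot, index="Employee",
                           columns="Day of the Week",
                           values="Sales", aggfunc="sum")

# Bonus: highlight the largest cell, a bit like conditional formatting
hot_pivot.style.highlight_max(axis=None, color="lightgreen")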


A Pivot Table is a great, quick visual tool for answering these types of questions, which are observational: head counts, total sales, and the like. From the example, we observe that Paige had more sales than Sam. But does that mean Paige is a better employee than Sam? With questions like these, we have to be careful: Paige worked a sunny Friday shift, which contributed the majority of her sales. A Pivot Table is not always a great tool for proving causation or answering questions like these; to answer them, we would employ a suite of statistical methods.

Data Formatting for a Pivot Table

If the tutorial above was a walk in the park but you still can't create a pivot table from your own data, the problem is most likely the formatting of that data. This section will discuss data formatting and data cleaning issues that may prevent you from creating a Pivot Table.

Multiple Data Sources:

Sometimes data is collected separately in different sheets or workbooks. Referring to the example above, it could be the case that the data was collected independently for each employee. Following the general rule that all values of the same type belong in one column solves this issue; a little copy and paste (see the sketch below) would accomplish this.
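
As a hypothetical sketch of that copy-and-paste step in pandas (the per-employee tables are invented), stacking the separate sources into one table looks like this:

import pandas as pd

# Hypothetical per-employee sheets, read in separately
paige = pd.DataFrame({"Employee": "Paige",
                      "Day of the Week": ["Friday", "Saturday"],
                      "Sales": [120, 90]})
sam = pd.DataFrame({"Employee": "Sam",
                    "Day of the Week": ["Friday", "Saturday"],
                    "Sales": [45, 30]})

# Stack them so all values of the same type share one column
all_sales = pd.concat([paige, sam], ignore_index=True)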

Data Structure:

Separately, there is the issue that the weather data (Sunny, Cloudy, Rainy) was collected such that each type of Weather is its own column. This structure is also known as unstacked data or wide data, which can be preferable when modeling. However, for visualizations such as Pivot Tables, it is best to move these (Sunny, Cloudy, Rainy) under one column, producing stacked data or long data.
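
In pandas terms, this wide-to-long reshaping is a melt. A minimal sketch with invented numbers:

import pandas as pd

# Unstacked/wide: one column per weather type
wide = pd.DataFrame({"Day of the Week": ["Friday", "Saturday"],
                     "Sunny": [120, 90], "Cloudy": [45, 0], "Rainy": [0, 30]})

# Stacked/long: a single Weather column and a single Sales column
long = wide.melt(id_vars="Day of the Week",
                 var_name="Weather", value_name="Sales")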

Dirty Data:

So far, we have assumed that each cell's data has been correctly filled out; in other words, the focus has been on the overall structure of the data, or the columns. However, when working on real-world problems, you will most often have some dirty data; for example, the dates may be spelled differently. In this example it is trivial to fix, but when your spreadsheet contains thousands or millions of rows, data cleaning can be tremendously time-consuming. Additionally, we have not covered the importance of number formats in Excel and Google Sheets. In the video examples above, the days of the week are ordered correctly; depending on the spreadsheet, sometimes this happens automatically, but sometimes the number format of your data can cause Pivot Table display issues.
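
As a small illustration of both problems, here is a hedged pandas sketch that standardizes day-of-week spellings and then imposes a true weekday order, so a Pivot Table won't sort the days alphabetically:

import pandas as pd

# Standardize inconsistent spellings
days = pd.Series([" monday", "Monday", "MONDAY "]).str.strip().str.capitalize()

# Impose a real weekday order instead of alphabetical order
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
days = pd.Categorical(days, categories=order, ordered=True)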

Final Remarks

Hopefully, you found this post useful. If you have questions about pivot tables, or about drawing conclusions from a pivot table you have already created, Boxplot Analytics is well versed in data visualization and statistics. Furthermore, if you are spending valuable time trying to structure, clean, or format your data, please don't hesitate to contact us. Data cleaning is one of the many services Boxplot Analytics can provide for your business.

Are Decision-Makers Disempowered by the Modern Data Stack?

Future Data 2020

Many winter moons ago, I (virtually) attended Future Data 2020, a conference about the next generation of data systems. During the conference, I watched an interesting talk given by Tristan Handy, founder and CEO of Fishtown Analytics, called The Modern Data Stack: Past, Present, and Future. During the talk, Tristan discussed a so-called Cambrian explosion of data products built upon data warehouses, such as Amazon Redshift, between 2012 and 2016, as well as his opinion that we are on the precipice of a similar paradigm shift, which he referred to as “the second Cambrian explosion.”

Tristan’s perspective on the modern data stack provided much food for thought; however, the topic I want to explore here stems from a brief comment made about the future of self-service in data-driven decision-making: How those who are not necessarily on the cutting-edge of data science can best leverage their data to make informed decisions.

Office Space

To start this exploration, I will first give a simplified version of a past during which I was not of working age and with which I therefore have no direct experience: Prior to the aforementioned (first) Cambrian explosion, data analysis was primarily carried out using spreadsheets, such as (of course) Microsoft Excel. In many theoretical offices in the 90s and 00s, countless nameless and faceless theoretical analyst/decision-makers spent their Mondays through Fridays bouncing among tens of tens of Excel spreadsheets, adding calculated fields in two-lettered columns and introducing errors for which there would be no record; it was a laugh riot, the analyst/decision-makers earned decent theoretical wages for their time spent, and everyone watched Friends in the evenings without feeling obligated to discuss how problematic it was.

In more recent times, with the advent of modern data warehouses, data storage could be better separated from data analysis, and many, many SaaS companies profited from this division on scales not easily understood by humans. So rather than the happy-go-lucky Friends-era paradigm, with data tabulated in one program with nice little cells and able to be analyzed in that same program by analyst/decision-makers, a number of new business intelligence platforms began to make their way into offices, raining on everyone's parade, and just because the new guy attended a "conference" about the "future" in "Des Moines."

The Modern Data Racket

Let me take a step back: In or around his talk (source), Tristan made the following comments:

“How do you democratize self-service? Controversial, but I believe the Modern Data Stack disempowered many decision-makers. Those comfortable w/ Excel feel cut off from the source of truth. What if the spreadsheet interface is actually the correct way?”

Tristan prefaces his claim that the modern data stack has disempowered decision-makers with the warning that his opinion may be controversial, but I would argue that his statement is not controversial at all, mostly because it is unarguably true. With the shift of data storage and analysis away from the flexible and easy-to-use spreadsheet and toward ecosystems such as data lakes, data rivers, data abysses, and data Charybdises, end-users (i.e., the analyst/decision-makers of yore), many of whom primarily use data tools as a means to an end, have likely lost their way.

Put simply, it is not as simple to navigate the modern data stack as it was to navigate acres of spreadsheets. Spreadsheets, with all of their flaws, have almost no learning curve: if you can turn on a computer and open a file, you can navigate a spreadsheet. Furthermore, from the start, you are only a few clicks, keystrokes, and neural connections away from mastering formulas, pivot tables, and visualizations. I hate spreadsheets! — but they are a near-perfect balance of usability and flexibility.

In contrast, while modern business intelligence tools may be built with end-users with varying levels of technical expertise in mind, they tend to have a steeper learning curve. For example, while today’s decision-maker, now stripped of his or her ‘analyst’ status, can likely navigate a dashboard and make decisions based on the information presented, he or she has lost the almost-tactile experience of sifting through the data with his or her own hands.

The Second Law

I know what you are thinking—literal metric tons of decisions were made based on little more than a pie chart from an hours-long presentation that was not put together by the person who had final say in the decision-making process—but please allow me to employ the above generalization to support my next point: Every new data technology moves the decision-maker further downstream from the data source.

Today, the data required by a decision-maker may be located in a neatly designed dashboard, on physical servers, somewhere in the cloud, and/or on the backs of napkins; they may have undergone various transformations and exist in several slightly different forms of varying accuracy and transparency; and, most likely, that decision-maker does not know which of these sources contains the data he or she needs to make an optimal decision, nor the processing those data underwent. It is complete chaos, and not the fun and festive Bacchanalian kind (and I haven't even spoken of the inherent fuzziness of data).

The more technologies a company implements in its data stack, the more points there are for potential misunderstandings, and the more training individual decision-makers have to undergo to become fluent in the data stack on which they rely. In other words, decision-makers are being disempowered by the increasing complexity of the modern data stack.

So… What now?

I see three possible solutions to the problem of disempowerment:

  • Stop trying to reinvent the wheel and keep spreadsheets around for the long haul;
  • Hire decision-makers who are prepared to use and keep up with the modern data stack; or
  • Promote close collaboration between data experts and decision-makers to support decision-making.

All three options have pros and cons, but I am personally a fan of the third. In the last decade or so, there has been a rapid increase in our ability to store and manipulate data, and spreadsheets alone cannot be expected to fulfill all modern data needs. Similarly, in many industries, decision-makers alone cannot be expected to stay on the cutting-edge of data science. Therefore, it follows that close collaboration between data experts and decision-makers is becoming increasingly necessary in the modern office.

Alternatively, perhaps in time an easy-to-use tool will come along that can be used to both store and analyze data… Oh… wait… that’s the spreadsheet.

Pie Charts


Introduction

It seems as if people are split on pie charts: either you passionately hate them, or you are indifferent. In this article, we are going to explain why pie charts are problematic and, if you fall into the latter category, what you can do when creating pie charts to avoid upsetting those in the former.

Why are pie charts problematic?

They use size to convey information

A pie chart uses the size of a portion (slice) of a circle (pie) to display a numerical variable. This factor is not an issue in and of itself, as many chart types use size to convey information, including bubble charts and bar charts; however, while bubble charts and bar charts use diameter and height, respectively, to convey information, pie charts rely on the angle describing a slice, and the human eye is not very good at recognizing differences in angles.

Suppose we took a survey on people's favorite kinds of pie. In the chart below, it is difficult to see how the categories relate to each other; individually, Cherry and Rhubarb seem to comprise a smaller portion of the pie than either Apple or Pumpkin, but it may not be obvious (without looking at the data) which is the smaller slice.

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.5, 0, 0.75, 0))

#Data for pie chart
x = c(18, 15, 13, 25, 29)
labels = c("Key Lime", "Cherry", "Rhubarb", "Pumpkin", "Apple")
cols = c("greenyellow", "red", "mediumvioletred", "darkorange", "cornsilk")

#Build the pie chart
pie(x, labels, radius = 1, col=cols)

They cannot display many categories well

This issue of conveying size via angle is even more pronounced when many categories are shown in a single pie chart. Furthermore, unlike some charts that are used to display several categories at once, such as bar charts, pie charts depend on differences in color to denote category; therefore, a large palette of colors is necessary, and without proper selection of the palette, the results could be either garish or ambiguous.

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.55, 0, 0.8, 0))

#Data for pie chart
x = c(2, 4, 5, 10, 13, 15, 15, 17, 19)
labels = c("Key Lime", "Pecan", "Cherry", "Blueberry", "Rhubarb", "Lemon Meringue", "Blackberry", "Pumpkin", "Apple")
cols = c("greenyellow", "tan4", "red", "darkblue", "mediumvioletred", "yellow", "black", "darkorange", "cornsilk2")

#Build the pie chart
pie(x, labels, radius = 1, col=cols)

They show parts of a whole

Pie charts represent a whole as its components. Therefore, if your dataset is a subset of a larger dataset (and thus does not represent the whole) or if your dataset consists of independent categories (and thus represents multiple wholes), then a pie chart may not be appropriate.

Pie charts in popular packages

We wouldn't want to assume anyone's opinion on as divisive a topic as the pie chart, but perhaps the disdain for this chart type is best exhibited by the lack of built-in functions for creating them in two very popular data visualization packages: ggplot2 (R) and seaborn (Python). With both packages, a pie chart can be created only through trickery.

Trickery

It is convenient, perhaps a little too convenient, that a pie chart is no more than a single stacked bar displayed in polar coordinates. The code below builds a pie chart like the one shown above, but using ggplot2.

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.55, 0, 0.8, 0))

#Data for the pie chart
values = c(9, 2, 5, 10, 13, 15, 10, 17, 19)
labels = c("Key \nLime", "Pecan", "Cherry", "Blueberry", "Rhubarb", 
           "Lemon \nMeringue", "Blackberry", "Pumpkin", "Apple")
cols = c("Key \nLime"="greenyellow", "Pecan"="tan4", "Cherry"="red", "Blueberry"="darkblue", 
         "Rhubarb"="mediumvioletred", "Lemon \nMeringue"="yellow", "Blackberry"="black", 
         "Pumpkin"="darkorange", "Apple"="cornsilk2")

data = data.frame(labels, values)

#Build the pie chart
ggplot(data, aes(x="", y=values, fill=labels))+
    geom_bar(width = 1, stat = "identity") +
    scale_fill_manual(values=cols) +
    coord_polar("y", start=0) +  #Use polar coordinates
    theme(axis.title=element_blank(),
          axis.text=element_blank(),
          legend.title=element_blank())

What chart types can be used to replace pie charts?

Bar charts

Similar to pie charts, bar charts use size to convey information; however, for bar charts, the height of a rectangle varies, and differences between the heights of bars are easier to recognize than the differences between the angles of portions of a circle. Furthermore, bar charts can be configured to show absolute numbers, percentages, or both!

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.5, 1, 0.2, 1))

#Data for bar chart
values = c(9, 2, 5, 10, 13, 15, 10, 17, 19)
labels = c("Key \nLime", "Pecan", "Cherry", "Blueberry", "Rhubarb", 
           "Lemon \nMeringue", "Blackberry", "Pumpkin", "Apple")

data = data.frame(labels, values)
data = data[order(-values),]

#Build the bar chart
barplot(height=data$values, 
        names.arg=data$labels, 
        ylab="Votes",
        ylim = c(0, 20),
        cex.names=0.7)

Waffle Charts

Waffle charts, which are growing in popularity, use the number of unit squares rather than size to visualize a numerical dimension. The resulting graph is similar to a stacked bar or tree map; however, because each square is a unit, it is easier for a person to confirm that a perceived difference between categories is real without relying on text, compared to alternatives that rely solely on size.

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.5, 1, 0.2, 1))

# Create data
pies = c("Pecan"=2, "Cherry"=5, "Key Lime"=9, "Blueberry"=10, "Blackberry"=10, 
         "Rhubarb"=13, "Lemon Meringue"=15, "Pumpkin"=17, "Apple"=19)

waffle(pies, rows=5, size=1.5, 
       colors=c("tan4", "red", "greenyellow", "darkblue", "black", 
                "mediumvioletred", "yellow", "darkorange", "cornsilk2"),
       xlab="1 square = 1 vote", legend_pos = "bottom")

But what if I don't like the alternatives?

Even though there are many alternatives (e.g., bar charts, stacked bars, waffle charts, lollipop charts, tree maps), pie charts are a familiar chart type to most people, and depending on the audience, familiarity may be an important factor that affects interpretability. So if you want to stick with pie charts, consider taking the following advice.

Limit the number of categories via grouping

To avoid visual clutter and to ensure your pie chart is readable, the number of categories should be small. Therefore, it may be useful to group categories that individually comprise a small proportion of the pie into a single category. Note that, when using this approach, it may be helpful to list the items contained in the derived category. Furthermore, it is best to ensure that the new category does not form the majority of the resulting pie.

Show percentages or absolute numbers (or both) as text

Text can be used to prevent misunderstandings due to ambiguity. By including text information, a person can see if there are differences among the categories. However, if it is necessary to include text, then one can argue that the visualization itself is ineffective (so be prepared to defend your choice of chart type).

#Adjusting plot size and margins
options(repr.plot.width=8, repr.plot.height=4)
par(mfrow=c(1,1), mai = c(0.55, 0, 0.8, 0))

#Data for pie chart
x = c(15, 20, 35, 30)
labels = c("Other (15%)", "Cherry (20%)", "Pumpkin (35%)", "Apple (30%)")
cols = c("black", "red", "darkorange", "cornsilk2")

#Build the pie chart
pie(x, labels, radius = 1, col=cols)

Conclusions

We hope you found our discussion of pie charts informative. While pie charts can be avoided in most cases, they remain a pithy little chart on which many, many people have little to no opinion. However, to avoid a mass uptake of pitchforks and torches, please remember to employ pie charts responsibly and to use caution when including any controversial chart type in your next presentation.

Required libraries

The code examples in this post assume the following libraries are loaded:

library(repr)
library(dplyr)
library(plotly)
library(waffle)
library(ggplot2)
library(RColorBrewer)

Scatter Plots: What they are and how to make the most of them


Useful Python Snippets

This blog post is a compilation of little tidbits and code snippets that address common issues when programming for data analysis in Python.

General Snippets

Difference between JSON and XML

This page gives a great example of the difference between data in JSON format and XML format. It shows the exact same data in both formats: https://json.org/example.html
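
As a quick illustration of the idea (this tiny record is made up), here is the same data in both formats, parsed with Python's standard library:

import json
import xml.etree.ElementTree as ET

json_text = '{"employee": {"name": "Paige", "sales": 120}}'
xml_text = "<employee><name>Paige</name><sales>120</sales></employee>"

print(json.loads(json_text)["employee"]["name"])  # Paige
print(ET.fromstring(xml_text).find("name").text)  # Paige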


Converting scientific notation into numbers

Converting from scientific notation in a Pandas Dataframe: https://re-thought.com/how-to-suppress-scientific-notation-in-pandas/
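
One common approach from that article, sketched here for convenience, is to set a display format for floats:

import pandas as pd

# Show floats with two decimal places instead of scientific notation
pd.set_option('display.float_format', '{:.2f}'.format)

# Revert to the default behavior when you're done
pd.reset_option('display.float_format')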


Remove ellipses from pandas dataframe preview:

pd.options.display.max_columns = 2000

# If you don't want to make the change permanent for the notebook
# (e.g., to avoid excessive output in other cells), you can also use
# pd.option_context:

with pd.option_context('display.max_columns', 2000):
    print(df.describe())

# Temporarily display all column names
with pd.option_context('display.max_seq_items', None):
    print(df.columns)

Isolate date columns:

# Collect the names of all columns that contain 'Date'
datecols2 = []

for item in prod.columns:
    if 'Date' in item:
        datecols2.append(item)

datecols2

Add grand total column to a pivot table:

# Sum ListPrice by Color and Class, then total each row across classes
# (assumes numpy is imported as np)
test_df = pd.pivot_table(prod, index="Color", columns="Class", values="ListPrice", aggfunc=np.sum)
test_df['Grand Total'] = test_df.sum(axis=1)
test_df

Comparing group by syntax to pivot table syntax:

# These two lines produce the same counts of Name by Class and Style
prod.groupby(['Class', 'Style']).count()[['Name']]
pd.pivot_table(prod, index=['Class', 'Style'], values="Name", aggfunc="count")

Use apply for multiple columns in a dataframe:

# Multiply two columns row by row (axis=1 passes each row to the lambda)
avo.apply(lambda row: row.AveragePrice * row['Total Volume'], axis=1)

Choose an argument for the open function (file i/o)
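
The original reference for this section appears to be missing; as a quick guide, these are the standard mode values from the Python documentation:

# Common values for the mode argument of open():
#   'r'  read (default)         'w'  write, truncating the file first
#   'a'  append                 'x'  create, failing if the file exists
#   'b'  binary (e.g., 'rb')    '+'  read and write (e.g., 'r+')
with open('example.txt', 'w') as f:
    f.write('hello')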


Install packages in Jupyter Notebook

# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install pytime

Understanding copying objects in python

These two links are excellent at explaining:
https://stackoverflow.com/questions/2612802/how-to-clone-or-copy-a-list
and
https://www.geeksforgeeks.org/copy-python-deep-copy-shallow-copy/
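
The short version, as a sketch:

import copy

a = [[1, 2], [3, 4]]
shallow = copy.copy(a)    # new outer list, same inner lists
deep = copy.deepcopy(a)   # fully independent copy

a[0][0] = 99
print(shallow[0][0])  # 99 -- the shallow copy shares the inner lists
print(deep[0][0])     # 1  -- the deep copy does not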


Reverse Dictionary Function

def reverse_dict(lookup_value):
    # Print every key whose value matches lookup_value
    dictionary = {'george': 16, 'amber': 19}
    for key, value in dictionary.items():
        if value == lookup_value:
            print(key)

reverse_dict(19)  # prints 'amber'

What are args and kwargs?
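
The snippet for this section appears to be missing; in short, *args collects extra positional arguments into a tuple, and **kwargs collects extra keyword arguments into a dict:

def demo(*args, **kwargs):
    print(args)    # (1, 2)
    print(kwargs)  # {'a': 3}

demo(1, 2, a=3)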


Removing duplicate rows in a dataframe

https://pandas.pydata.org/pandas-docs/version/0.17/generated/pandas.DataFrame.drop_duplicates.html

https://jamesrledoux.com/code/drop_duplicates
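
The two most common patterns from those pages, sketched with a hypothetical df and 'id' column:

# Drop fully duplicated rows, keeping the first occurrence
df = df.drop_duplicates()

# Or deduplicate on a subset of columns
df = df.drop_duplicates(subset=['id'], keep='first')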


This is an extremely important pandas doc page! Indexing & slicing dataframes

https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html
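
The essentials from that page, using the prod dataframe from the snippets above:

prod.loc[2, 'Name']          # label-based: row label 2, column 'Name'
prod.iloc[0, 1]              # position-based: first row, second column
prod[prod['ListPrice'] > 0]  # boolean-mask filtering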


Selecting & looping through parts of a dataframe
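
The snippet for this section appears to be missing; a minimal sketch, reusing the prod dataframe from above:

# itertuples is the faster option; fields become attributes
for row in prod.itertuples():
    print(row.Index, row.Name)

# iterrows yields (index, Series) pairs; slower but flexible
for i, row in prod.iterrows():
    print(i, row['Name'])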


Connect to a mySQL database:

import mysql.connector
import pandas as pd

# Set up your connection to the database
myConnection = mysql.connector.connect(host=[PUT YOUR HOST NAME HERE],
                                       user=[PUT YOUR USERNAME HERE],
                                       passwd=[PUT YOUR PASSWORD HERE],
                                       db=[PUT THE DATABASE NAME HERE])

# Read the results of a SQL query into a pandas data frame
my_table = pd.read_sql('SELECT * FROM table_name', con=myConnection)

Connect to postgres database:

import psycopg2

connection = psycopg2.connect(user="your-username-here-keep-quotes",
                              password="your-password-here-keep-quotes",
                              host="your-host-here-keep-quotes",
                              port="5432",
                              database="your-database-here-keep-quotes")
cursor = connection.cursor()
cursor.execute("SELECT * FROM django_session;")
record = cursor.fetchone()
print(record)

Connect to the Twitter API

import numpy as np
import pandas as pd
import json
from pandas.io.json import json_normalize
import twitter

# You need to replace all the capital words in brackets with your
# ACTUAL keys. The quotation marks stay but the brackets and 
# capital words must go. 
api = twitter.Api(consumer_key='[CONSUMER KEY GOES HERE]',
                  consumer_secret='[CONSUMER SECRET GOES HERE]',
                  access_token_key='[ACCESS TOKEN KEY GOES HERE]',
                  access_token_secret='[ACCESS TOKEN SECRET]')

# Get the tweet data since that last tweet
# The user_id is for Boxplot's timeline, replace it with your 
# own if you'd like!
user_timeline = api.GetUserTimeline(user_id='959273870023905280')
latest_twitter_data_final = pd.DataFrame()

for i in range(len(user_timeline)):
    rowasdf = json_normalize(json.loads(json.dumps(user_timeline[i]._json)))
    latest_twitter_data_final = pd.concat([latest_twitter_data_final,
                                           rowasdf]).reset_index(drop=True)

latest_twitter_data_final

Loop through a Series and make sure that each subsequent value is greater than or equal to the one before it. If not, set the value equal to the one before it:

previous_value = 0

def previous(current):
    # Carry the previous value forward whenever the current value is smaller
    global previous_value
    if current < previous_value:
        return_value = previous_value
    else:
        return_value = current
    previous_value = return_value
    return return_value

choc['Rating'].head(10).apply(previous)
# Note: pandas can do this directly with choc['Rating'].head(10).cummax()

Remove white space in a column

# str.strip removes only leading and trailing whitespace, not inner spaces
df = pd.DataFrame({'a': [' app le ']})
print(len(df.a[0]))  # 8
df.a = df.a.str.strip()
len(df.a[0])         # 6: ' app le ' becomes 'app le'

Customizing Matplotlib Visualizations

How to customize the range of the x-axis and rotate the tick marks:

import matplotlib.pyplot as plt

awesome_table1 = pd.pivot_table(data, index='DEGFIELD3',
                                columns='REGION2', values='CBSERIAL',
                                aggfunc='count')
awesome_table1.plot(figsize=(18, 12));

# One tick per category, labeled with the pivot table's index, rotated -45 degrees
plt.xticks(range(0, 13), list(awesome_table1.index), rotation=-45)

Also see:
https://stackoverflow.com/questions/12608788/changing-the-tick-frequency-on-x-or-y-axis-in-matplotlib
https://stackoverflow.com/questions/27671748/how-to-print-y-axis-label-horizontally-in-a-matplotlib-pylab-chart
https://stackoverflow.com/questions/10998621/rotate-axis-text-in-python-matplotlib


Create reusable settings for a chart:

def my_scatterplot(x_txt, y_txt, df, colorcol):
    df.plot(kind='scatter', x=x_txt, y=y_txt, c=colorcol, colormap='winter', figsize=(10,4), s=10, alpha=.5)
    
my_scatterplot('Total Bags', 'AveragePrice', avo, 'type_as_num')

Change the size of all charts in a notebook:

# put this at the top of the notebook:
plt.rcParams["figure.figsize"] = [15, 10]

Example of changing colors and marker types in scatterplots:

colors = ['b', 'c', 'y', 'm', 'r']

en = plt.scatter(books_data[books_data['language_code']=='en']['average_rating'],
                 books_data[books_data['language_code']=='en']['ratings_count'],
                 marker='x', color=colors[0])

spa = plt.scatter(books_data[books_data['language_code']=='spa']['average_rating'],
                  books_data[books_data['language_code']=='spa']['ratings_count'],
                  color=colors[2])

fre = plt.scatter(books_data[books_data['language_code']=='fre']['average_rating'],
                  books_data[books_data['language_code']=='fre']['ratings_count'],
                  marker='o', color=colors[1])

# More series can be layered on the same way, e.g.:
# a  = plt.scatter(random(10), random(10), marker='o', color=colors[2])
# h  = plt.scatter(random(10), random(10), marker='o', color=colors[3])
# hh = plt.scatter(random(10), random(10), marker='o', color=colors[4])
# ho = plt.scatter(random(10), random(10), marker='x', color=colors[4])

plt.legend((en, spa, fre),
           ('English', 'Spanish', 'French'),
           scatterpoints=1,
           loc='lower left',
           ncol=3,
           fontsize=8)

plt.show()

Multiple y axes, and forced axis

# This is one data series we're trying to plot
chart1 = sets.groupby('year')['num_parts'].count()
# This is the other data series we're trying to plot
chart2 = sets.groupby('year')['num_parts'].mean()

# Create a Matplotlib figure & subplot
fig, ax = plt.subplots()

ax2 = ax.twinx()  # ax2 shares the x-axis with the ax Axes object
# This is what forces the scale on the second y-axis!!
ax2.set_ylim(bottom=0, top=799)

# Graph both of our data sets: one bar, one line
ax.bar(chart1.index, chart1, color='dodgerblue')
ax2.plot(chart2.index, chart2, color='red')

# Set the size of the resulting figure
fig.set_size_inches(12, 8)

Other Visualizations

A great tutorial for mapping:

https://towardsdatascience.com/mapping-geograph-data-in-python-610a963d2d7f


Side by side boxplots with seaborn:
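
The snippet for this section appears to be missing; a minimal sketch, reusing the wine dataframe (prod) from the Matplotlib boxplot examples below:

import seaborn as sns
import matplotlib.pyplot as plt

# One box per country, side by side
sns.boxplot(x='country', y='points', data=prod)
plt.show()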


Set x and y axes for seaborn plots:

ax = sns.barplot(x = 'val', y = 'cat',
              data = fake,
              color = 'black')
ax.set(xlabel='common xlabel', ylabel='common ylabel')
plt.show()

Make a word cloud in the shape of a custom image:

# If using Jupyter Notebook, you need to install the
# wordcloud module like this
import sys
!{sys.executable} -m pip install wordcloud

# Import the libraries needed
from PIL import Image
import numpy as np
import pandas as pd
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
import matplotlib.pyplot as plt

# Import your dataset
prod = pd.read_csv('winemag-data-130k-v2.csv')

# Import the image mask
wine_mask = np.array(Image.open("wine_mask.png"))

# Build one long, lowercased string from the first 1,000 descriptions
comment_words = ''
for val in prod.description.iloc[0:1000]:
    for token in str(val).split(' '):
        comment_words += ' ' + token.lower()

stopwords = set(STOPWORDS)
stopwords.update(["drink", "now", "wine", "flavor", "flavors"])

# Generate the word cloud
wordcloud = WordCloud(background_color="floralwhite",
                      max_words=1000,
                      mask=wine_mask,
                      stopwords=stopwords,
                      contour_width=3,
                      contour_color='floralwhite').generate(comment_words)

plt.figure(figsize=(48, 48), facecolor=None)
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis('off')
plt.tight_layout(pad=0)
plt.title("Frequent Words from Tasters - Wine Form", fontsize=40, color='gray')
plt.show()

Make a single box and whisker plot with Matplotlib:

fig, axs = plt.subplots(1, 1)
axs.boxplot(prod['points'])
axs.set_title('basic plot')
plt.show()

Multiple box and whisker plots with Matplotlib:

# To make side by side box and whisker plots (in this example, get 
# points for each country, and then make a list of those lists). 
# That is what is passed in to the boxplot function:

u = list(prod[prod['country']=='Italy']['points'])
m = list(prod[prod['country']=='Portugal']['points'])
w = list(prod[prod['country']=='Germany']['points'])
final_list = []
final_list.append(u)
final_list.append(m)
final_list.append(w)

fig7, ax7 = plt.subplots()
ax7.set_title('Multiple Samples with Different sizes')
ax7.boxplot(final_list);

Pie Chart:

plt.figure(figsize=(8, 6))  # create the figure first; note figsize is in inches
prod.country_other.value_counts().plot(kind='pie',
   autopct='%1.0f%%', colors=['skyblue', 'lavender', 'lightpink',
   'lightcyan', 'lemonchiffon', 'mistyrose'])
plt.legend(title='Wine Country of Origin', loc='best', bbox_to_anchor=(1, 0, 0.5, 1))

Bubble plot tutorial
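
The tutorial link for this section appears to be missing; as a minimal sketch, a bubble plot is a scatter plot whose marker size (s) carries a third variable, shown here with the avocado dataframe (avo) from earlier snippets:

import matplotlib.pyplot as plt

plt.scatter(avo['Total Volume'], avo['AveragePrice'],
            s=avo['Total Bags'] / 1000,  # scale sizes to something readable
            alpha=0.5)
plt.show()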


Calculating correlation with a scatterplot

https://stackoverflow.com/questions/41635448/how-can-i-draw-scatter-trend-line-on-matplot-python-pandas/41635626
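
A sketch of the approach from that answer, reusing the avo dataframe: fit a least-squares line with np.polyfit, draw it over the scatter, and compute the Pearson correlation separately:

import numpy as np
import matplotlib.pyplot as plt

m, b = np.polyfit(avo['Total Volume'], avo['AveragePrice'], 1)  # slope, intercept
plt.scatter(avo['Total Volume'], avo['AveragePrice'], s=10, alpha=0.5)
plt.plot(avo['Total Volume'], m * avo['Total Volume'] + b, color='red')
plt.show()

print(avo['Total Volume'].corr(avo['AveragePrice']))  # Pearson correlation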