Friday, 19 January 2024

Prepare, Clean, Transform, and Load Data using Power BI

 


What you'll learn

Prepare and Clean Data using Power BI

Transform and Load Data using Power BI

Join Free: Prepare, Clean, Transform, and Load Data using Power BI

About this Guided Project

Usually, tidy data is a mirage in a real-world setting, and before quality analysis can be done, data needs to be in a proper format. This project-based course, "Prepare, Clean, Transform, and Load Data using Power BI", is for beginner and intermediate Power BI users who want to advance their knowledge and skills.

In this course, you will learn practical ways to clean and transform data using Power BI. We will cover common cleaning and transformation tasks such as splitting, renaming, adding, and removing columns. By the end of this 2-hour-long project, you will be able to change data types, merge and append data sets, import data from the web, and unpivot data.
This project-based course sits at a beginner to intermediate level in Power BI. To get the most out of this project, it is essential to have a basic understanding of using a computer before you take it.

Build an Income Statement Dashboard in Power BI

 


What you'll learn

Build an Income statement dashboard in Power BI

Visualize the income statement using cards, table and column charts

Transform & clean data in the Power Query editor

Join Free: Build an Income Statement Dashboard in Power BI

About this Guided Project

In this 1.5-hour-long project, we will create an income statement dashboard filled with relevant charts and data. Power BI dashboards are an amazing way to visualize data and make it interactive. We will begin this guided project by importing the data and transforming it in the Power Query editor. We will then visualize the income statement using a table; visualize total revenue, operating income, and net income using cards; and, in the final task, visualize year-on-year growth using clustered column charts. This project is for anyone who is interested in Power BI and data visualization, and especially for those who work in accounting and finance departments. By the end of this course, you will be confident in creating financial statement dashboards with many different kinds of visualizations.

Create a Sales Dashboard using Power BI

 


What you'll learn

Build an attractive and interactive sales dashboard with all the necessary visualizations in a black and blue theme

Visualize sales data using bar charts & pie charts

Create interactive maps to visualize sales data by countries and markets

Join Free: Create a Sales Dashboard using Power BI

About this Guided Project

In this 1-hour-long project, you will build an attractive and eye-catching sales dashboard using Power BI in a black and blue theme that will make your audience go "wow". We will begin this guided project by importing data. We will then create bar charts and pie charts to visualize the sales data and position the graphs on the dashboard. In the final tasks, we will create interactive maps to visualize sales data by countries and markets. By the end of this course, you will be confident in creating beautiful dashboards with many different kinds of visualizations.

HR Analytics- Build an HR dashboard using Power BI

 


What you'll learn

Build an attractive and eye-catching HR dashboard

Visualize gender & racial diversity using graphs & charts in Power BI

Explore buttons, themes, filters & slicers to make the dashboard interactive & smart

Join Free: HR Analytics- Build an HR dashboard using Power BI

About this Guided Project

In this 1-hour-long project, you will build an attractive and eye-catching HR dashboard using Power BI. We will begin this guided project by importing data and creating an employee demographics page that gives an overall demographic outlook of the organization. We will then create pie charts and doughnut charts to visualize gender and racial diversity. In the final tasks, we will create an employee detail page that provides all the important information about any employee with just a click. We will also explore buttons, themes, slicers, and filters to make the dashboard more interactive and useful. By the end of this course, you will be confident in creating beautiful HR dashboards that you can use for personal or organizational purposes.

Data-Driven Decisions with Power BI

 


There are 5 modules in this course

New Power BI users will begin the course by gaining a conceptual understanding of the Power BI desktop application and the Power BI service. Learners will explore the Power BI interface while learning how to manage pages and understand the basics of visualizations.  Learners can download a course dataset and engage in numerous hands-on experiences to discover how to import, connect, clean, transform, and model their own data in the Power BI desktop application.

Join Free: Data-Driven Decisions with Power BI

Learners will investigate reports, learn about workspaces, and practice viewing, creating, and publishing reports to the Power BI service. Finally, learners will become proficient in the creation and utilization of dashboards.

Use Power Bi for Financial Data Analysis


 What you'll learn

Navigate and understand the process of importing data into Power BI.

Use Power Query to clean data before constructing visuals and reports. Determine relationships between data and use reference tables in Power BI.

Create and design a reporting dashboard with dynamic features. Publish and share your report

Join Free: Use Power Bi for Financial Data Analysis

About this Guided Project

In this project, learners will take a guided look through Power BI dynamic reports and visualizations for financial data analysis. As you view, load, and transform your data in Power BI, you will learn which steps are key to building an effective financial report dashboard and how to connect your report for dynamic visualizations. Data reporting and visualization is one of the most critical steps in a financial, business, or data analyst's work: data is only effective if it can be communicated clearly to key stakeholders in the organization. Effective communication of data starts here.

Build Dashboards in Power BI

 


What you'll learn

Build a Dashboard in Power BI by building a report and visuals.

Build a report with visuals.

Create a dashboard and pin visuals.

Join Free: Build Dashboards in Power BI

About this Guided Project

In this project, you will create a Dashboard in Power BI. You will get data to bring into a model, build several reports, generate informative charts from each report, then choose powerful visuals to highlight on a Dashboard. Your new skills will help you efficiently summarize important information on a one-page dashboard with visual data.

Getting Started with Power BI Desktop

 


What you'll learn

Import and Transform Data with Power BI Desktop

Visualize Data with Power BI Desktop

Join Free: Getting Started with Power BI Desktop

About this Guided Project

In this 2-hour long project-based course, you will learn the basics of using Power BI Desktop software. We will do this by analyzing data on credit card defaults with Power BI Desktop. Power BI Desktop is a free business intelligence application from Microsoft that lets you load, transform, and visualize data. You can create interactive reports and dashboards quickly and easily. We will learn some of the basics of Power BI by importing, transforming, and visualizing the data.

This course is aimed at learners who are looking to get started with the Power BI Desktop software. There are no hard prerequisites and any competent computer user should be able to complete the project successfully.

Note: This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.

Top 20 Python Tuple Questions

 



Question 1:

What is a tuple in Python?

a) A collection of unordered elements

b) A collection of ordered elements

c) A single element

d) A data type


Question 2:

How do you create an empty tuple in Python?

a) tuple()

b) empty_tuple = ()

c) empty_tuple = tuple()

d) Both b and c


Question 3:

How do you access the first element of a tuple?

a) tuple[0]

b) tuple.first()

c) tuple.first

d) tuple.get(0)


Question 4:

Which of the following statements is used to add an element to a tuple?

a) tuple.insert(0, element)

b) Tuples are immutable, so elements cannot be added once a tuple is created

c) tuple.add(element)

d) tuple.extend(element)


Question 5:

What is the key difference between a tuple and a list in Python?

a) Tuples are mutable, while lists are immutable

b) Tuples are ordered, while lists are unordered

c) Tuples are immutable, while lists are mutable

d) Tuples can contain only numeric elements


Question 6:

How do you check if an element is present in a tuple?

a) element in tuple

b) tuple.contains(element)

c) tuple.exists(element)

d) element.exists(tuple)


Question 7:

What does the tuple.count(element) method do?

a) Counts the total number of elements in the tuple

b) Counts the occurrences of a specific element in the tuple

c) Counts the sum of all elements in the tuple

d) Counts the average value of elements in the tuple


Question 8:

How do you concatenate two tuples in Python?

a) tuple1 + tuple2

b) tuple1.concat(tuple2)

c) concat(tuple1, tuple2)

d) combine(tuple1, tuple2)


Question 9:

How do you create a tuple with a single element?

a) single_tuple = (1)

b) single_tuple = 1,

c) single_tuple = (1,)

d) Both a and b


Question 10:

Which method is used to find the index of the first occurrence of a specified element in a tuple?

a) tuple.index(element)

b) tuple.find(element)

c) tuple.search(element)

d) tuple.loc(element)


Question 11:

What happens when you try to modify an element in a tuple?

a) It is not possible to modify elements in a tuple as they are immutable

b) The element is updated successfully

c) Python raises an exception

d) The element is deleted from the tuple


Question 12:

How do you create a tuple with elements from 1 to 5 in Python?

a) tuple = (1, 2, 3, 4, 5)

b) tuple = range(1, 6)

c) tuple = tuple(1, 6)

d) tuple = (range(1, 6))


Question 13:

What is the purpose of the len() function when used with a tuple?

a) It returns the total number of elements in the tuple

b) It returns the last element of the tuple

c) It returns the length of each element in the tuple

d) It returns the sum of all elements in the tuple


Question 14:

How do you check if two tuples are equal?

a) tuple1.is_equal(tuple2)

b) tuple1 == tuple2

c) tuple1.equals(tuple2)

d) tuple1.equals(tuple2, strict=True)


Question 15:

What is the purpose of the max() function when used with a tuple?

a) It returns the maximum element in the tuple

b) It returns the index of the maximum element in the tuple

c) It returns the sum of all elements in the tuple

d) It returns the average value of elements in the tuple


Question 16:

Which method is used to remove the last element from a tuple?

a) tuple.remove_last()

b) tuple.pop()

c) tuple.delete_last()

d) Tuples are immutable, so elements cannot be removed


Question 17:

How do you convert a list to a tuple in Python?

a) tuple(list)

b) tuple = list

c) tuple.from_list(list)

d) tuple.convert(list)


Question 18:

What is the purpose of the sorted() function when applied to a tuple?

a) Reverses the order of elements in the tuple

b) Sorts the elements of the tuple in ascending order

c) Removes duplicate elements from the tuple

d) Shuffles the elements of the tuple randomly


Question 19:

What does the tuple.index(element) method return if the element is not found in the tuple?

a) None

b) -1

c) 0

d) Raises a ValueError


Question 20:

What is the output of the following code?

my_tuple = (3, 1, 4, 1, 5, 9, 2)

my_tuple.sort()

print(my_tuple)

a) (1, 1, 2, 3, 4, 5, 9)

b) (9, 5, 4, 3, 2, 1, 1)

c) (1, 1, 2, 3, 4, 5, 9, 2)

d) (1, 2, 3, 4, 5, 9)


Answer: 

  1. b) A collection of ordered elements
  2. d) Both b and c
  3. a) tuple[0]
  4. b) Tuples are immutable, so elements cannot be added once a tuple is created
  5. c) Tuples are immutable, while lists are mutable
  6. a) element in tuple
  7. b) Counts the occurrences of a specific element in the tuple
  8. a) tuple1 + tuple2
  9. c) single_tuple = (1,) (note that the trailing comma, not the parentheses, makes the tuple, so option b, single_tuple = 1,, also works)
  10. a) tuple.index(element)
  11. a) It is not possible to modify elements in a tuple as they are immutable (Python raises a TypeError if you try)
  12. a) tuple = (1, 2, 3, 4, 5)
  13. a) It returns the total number of elements in the tuple
  14. b) tuple1 == tuple2
  15. a) It returns the maximum element in the tuple
  16. d) Tuples are immutable, so elements cannot be removed
  17. a) tuple(list)
  18. b) Sorts the elements of the tuple in ascending order
  19. d) Raises a ValueError
  20. None of the options: the code raises an AttributeError, because tuples are immutable and have no sort() method. To get a sorted result, use sorted(my_tuple), which returns the list [1, 1, 2, 3, 4, 5, 9].
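
For quick self-checking, here is a minimal sketch in Python, with arbitrary sample values, that demonstrates the behaviours the answers above rely on: indexing, membership, count() and index(), concatenation, single-element tuples, and the difference between sorted() and the missing sort() method.

# Minimal sketch illustrating the tuple behaviours covered by the quiz.
# The values are arbitrary examples, not part of any course material.
my_tuple = (3, 1, 4, 1, 5, 9, 2)

print(my_tuple[0])            # 3 -> indexing starts at 0 (Q3)
print(1 in my_tuple)          # True -> membership test with `in` (Q6)
print(my_tuple.count(1))      # 2 -> occurrences of a value (Q7)
print(my_tuple.index(4))      # 2 -> index of the first occurrence (Q10)

combined = (1, 2) + (3, 4)    # concatenation builds a new tuple (Q8)
print(combined)               # (1, 2, 3, 4)

single = (1,)                 # the trailing comma makes the tuple (Q9)
also_single = 1,              # parentheses are optional
print(type(single), type(also_single))

print(len(my_tuple))          # 7 -> number of elements (Q13)
print(max(my_tuple))          # 9 -> largest element (Q15)
print(tuple([1, 2, 3]))       # (1, 2, 3) -> list-to-tuple conversion (Q17)
print(sorted(my_tuple))       # [1, 1, 2, 3, 4, 5, 9] -> sorted() returns a new list (Q18)

try:
    my_tuple.sort()           # tuples are immutable: no sort() method (Q20)
except AttributeError as exc:
    print("AttributeError:", exc)

try:
    my_tuple.index(42)        # element not found -> ValueError (Q19)
except ValueError as exc:
    print("ValueError:", exc)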

The Big Book of Small Python Projects: 81 Easy Practice Programs

 


Best-selling author Al Sweigart shows you how to easily build over 80 fun programs with minimal code and maximum creativity.

If you’ve mastered basic Python syntax and you’re ready to start writing programs, you’ll find The Big Book of Small Python Projects both enlightening and fun. This collection of 81 Python projects will have you making digital art, games, animations, counting programs, and more right away. Once you see how the code works, you’ll practice re-creating the programs and experiment by adding your own custom touches.

These simple, text-based programs are 256 lines of code or less. And whether it’s a vintage screensaver, a snail-racing game, a clickbait headline generator, or animated strands of DNA, each project is designed to be self-contained so you can easily share it online.

You’ll create:

• Hangman, Blackjack, and other games to play against your friends or the computer

• Simulations of a forest fire, a million dice rolls, and a Japanese abacus

• Animations like a virtual fish tank, a rotating cube, and a bouncing DVD logo screensaver

• A first-person 3D maze game

• Encryption programs that use ciphers like ROT13 and Vigenère to conceal text

If you’re tired of standard step-by-step tutorials, you’ll love the learn-by-doing approach of The Big Book of Small Python Projects. It’s proof that good things come in small programs!
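
To give a flavour of the kind of small program the book collects, here is a self-contained ROT13 sketch of the cipher mentioned above; it is an illustration written for this post, not code taken from the book.

# A tiny ROT13 cipher, in the spirit of the book's small text-based projects.
# This is an illustrative sketch, not code from the book.
import string

def rot13(text: str) -> str:
    """Rotate every ASCII letter 13 places; other characters pass through."""
    lower = string.ascii_lowercase
    upper = string.ascii_uppercase
    table = str.maketrans(lower + upper,
                          lower[13:] + lower[:13] + upper[13:] + upper[:13])
    return text.translate(table)

message = "Meet me at noon"
encoded = rot13(message)
print(encoded)          # Zrrg zr ng abba
print(rot13(encoded))   # applying ROT13 twice restores the original text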

Hard Copy : The Big Book of Small Python Projects: 81 Easy Practice Programs





Thursday, 18 January 2024

Top 20 Python List Questions




Question 1:

What is a list in Python?

a) A collection of unordered elements

b) A collection of ordered elements

c) A single element

d) A data type


Question 2:

How do you create an empty list in Python?

a) list()

b) empty_list = []

c) empty_list = list()

d) Both b and c


Question 3:

How do you access the first element of a list?

a) list[0]

b) list.first()

c) list.first

d) list.get(0)


Question 4:

Which of the following statements is used to add an element to the end of a list?

a) list.insert(0, element)

b) list.add(element)

c) list.append(element)

d) list.extend(element)


Question 5:

What is the purpose of the len() function when used with a list?

a) It returns the total number of elements in the list

b) It returns the last element of the list

c) It returns the length of each element in the list

d) It returns the sum of all elements in the list


Question 6:

How do you check if an element is present in a list?

a) element in list

b) list.contains(element)

c) list.exists(element)

d) element.exists(list)


Question 7:

What does the list.remove(element) function do?

a) Removes the first occurrence of the specified element from the list

b) Removes all occurrences of the specified element from the list

c) Removes the last element from the list

d) Removes the element at the specified index


Question 8:

How do you reverse the order of elements in a list?

a) list.reverse()

b) list.sort(reverse=True)

c) list.reorder()

d) list.flip()


Question 9:

What is the difference between the append() and extend() methods in Python lists?

a) There is no difference, and the terms are interchangeable

b) append() adds a single element, while extend() adds multiple elements

c) extend() adds a single element, while append() adds multiple elements

d) Both methods are used for removing elements from a list


Question 10:

What is the output of the following code?

my_list = [1, 2, 3]

new_list = my_list * 2

print(new_list)

a) [1, 2, 3, 1, 2, 3]

b) [2, 4, 6]

c) [1, 4, 9]

d) [1, 2, 3, 6, 9]


Question 11:

Which method is used to find the index of the first occurrence of a specified element in a list?

a) list.index(element)

b) list.find(element)

c) list.search(element)

d) list.loc(element)


Question 12:

How do you copy the elements of one list to another list in Python?

a) new_list = old_list.copy()

b) new_list = old_list.clone()

c) new_list = copy(old_list)

d) new_list = old_list[:]


Question 13:

What is the purpose of the pop() method in Python lists?

a) Adds an element to the end of the list

b) Removes the last element from the list and returns it

c) Removes the first occurrence of the specified element

d) Sorts the elements of the list


Question 14:

What is the difference between a list and a tuple in Python?

a) Lists are mutable, while tuples are immutable

b) Lists are immutable, while tuples are mutable

c) Both lists and tuples are mutable

d) Both lists and tuples are immutable


Question 15:

How do you insert an element at a specific index in a list?

a) list.add(index, element)

b) list.insert(index, element)

c) list.insert(element, index)

d) list.put(index, element)


Question 16:

Which method is used to clear all elements from a list?

a) list.clear()

b) list.remove_all()

c) list.delete()

d) list.empty()


Question 17:

What does the sorted() function do when applied to a list?


a) Reverses the order of elements in the list

b) Sorts the elements of the list in ascending order

c) Removes duplicate elements from the list

d) Shuffles the elements of the list randomly


Question 18:

What is the purpose of the count() method in Python lists?

a) Counts the total number of elements in the list

b) Counts the occurrences of a specific element in the list

c) Counts the sum of all elements in the list

d) Counts the average value of elements in the list


Question 19:

How do you create a list of numbers from 1 to 5 in Python?

a) list = [1, 2, 3, 4, 5]

b) list = range(1, 6)

c) list = list(1, 6)

d) list = [range(1, 6)]


Question 20:

What is the output of the following code?

my_list = [3, 1, 4, 1, 5, 9, 2]

my_list.sort()

print(my_list)

a) [1, 1, 2, 3, 4, 5, 9]

b) [9, 5, 4, 3, 2, 1, 1]

c) [1, 1, 2, 3, 4, 5, 9, 2]

d) [1, 2, 3, 4, 5, 9]


Answer : 

Question 1: b) A collection of ordered elements

Question 2: d) Both b and c

Question 3: a) list[0]

Question 4: c) list.append(element)

Question 5: a) It returns the total number of elements in the list

Question 6: a) element in list

Question 7: a) Removes the first occurrence of the specified element from the list

Question 8: a) list.reverse()

Question 9: b) append() adds a single element, while extend() adds multiple elements

Question 10: a) [1, 2, 3, 1, 2, 3]

Question 11: a) list.index(element)

Question 12: d) new_list = old_list[:] (option a, old_list.copy(), also creates a shallow copy)

Question 13: b) Removes the last element from the list and returns it

Question 14: a) Lists are mutable, while tuples are immutable

Question 15: b) list.insert(index, element)

Question 16: a) list.clear()

Question 17: b) Sorts the elements of the list in ascending order

Question 18: b) Counts the occurrences of a specific element in the list

Question 19: a) list = [1, 2, 3, 4, 5] (in Python 3, range(1, 6) returns a range object, not a list; wrap it as list(range(1, 6)) to get a list)

Question 20: a) [1, 1, 2, 3, 4, 5, 9]
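
As a quick way to verify the answers above, here is a small Python sketch, with arbitrary sample values, covering the list operations the quiz touches on: append() versus extend(), repetition with *, copying, insertion, sorting, and the Python 3 behaviour of range().

# Minimal sketch of the list behaviours covered by the quiz; values are arbitrary.
my_list = [3, 1, 4, 1, 5, 9, 2]

my_list.append(7)            # append() adds one element to the end (Q4)
my_list.extend([8, 6])       # extend() adds each element of an iterable (Q9)
print(my_list)

print(len(my_list))          # total number of elements (Q5)
print(4 in my_list)          # membership test (Q6)

my_list.remove(1)            # removes only the first occurrence of 1 (Q7)
print(my_list)

print([1, 2, 3] * 2)         # [1, 2, 3, 1, 2, 3] -> repetition, not scaling (Q10)

copy_a = my_list[:]          # slicing makes a shallow copy (Q12)
copy_b = my_list.copy()      # so does copy(); both are equivalent here
print(copy_a == copy_b)

copy_a.insert(0, 99)         # insert(index, element) (Q15)
print(copy_a[0])

numbers = list(range(1, 6))  # in Python 3, range() must be wrapped to get a list (Q19)
print(numbers)               # [1, 2, 3, 4, 5]

numbers.sort(reverse=True)   # in-place sort; reverse() flips order without sorting (Q8)
print(numbers)               # [5, 4, 3, 2, 1]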

Python Coding challenge - Day 116 | What is the output of the following Python Code?

 


Code:

my_list = [1, 2]

new_list = my_list * 2

print(new_list)

Solution and Explanation: 

The above code creates a new list new_list by repeating the elements of my_list twice using the * operator. Here's the output of the code:

my_list = [1, 2]

new_list = my_list * 2

print(new_list)

Output:

[1, 2, 1, 2]

As you can see, the elements [1, 2] from my_list are repeated, resulting in a new list [1, 2, 1, 2]. The * operator in this context duplicates the elements of the list the specified number of times.
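
One caveat worth knowing, shown in the short sketch below: the * operator repeats references to the existing elements rather than copying them, which only matters when the elements are themselves mutable (for example, nested lists).

# The repeated elements are the same objects, not copies.
row = [[0, 0]] * 3          # three references to the SAME inner list
row[0][0] = 99
print(row)                  # [[99, 0], [99, 0], [99, 0]]

# With immutable elements such as the integers in [1, 2], this sharing is harmless:
print([1, 2] * 3)           # [1, 2, 1, 2, 1, 2]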

Wednesday, 17 January 2024

How much do you know about Pandas library?



Question 1:

What is Pandas?

a) A species of bear

b) A data manipulation and analysis library in Python

c) A programming language

d) A web development framework


Question 2:

Which of the following data structures is used to store one-dimensional labeled data in Pandas?

a) Series

b) DataFrame

c) Array

d) List


Question 3:

How can you import the Pandas library in Python?

a) import panda

b) import pandas as pd

c) from pandas import *

d) import pd


Question 4:

What is the primary purpose of a Pandas DataFrame?

a) Storing only numerical data

b) Storing two-dimensional labeled data

c) Storing images and multimedia files

d) Storing text data


Question 5:

Which Pandas function is used to read a CSV file into a DataFrame?

a) read_csv()

b) load_csv()

c) import_csv()

d) read_file()


Question 6:

How can you access a specific column in a Pandas DataFrame?

a) By using the column's index

b) By using the column's label or name

c) By using the row number

d) By using the DataFrame's index


Question 7:

What does the head() function do in Pandas?

a) Prints the first few rows of a DataFrame

b) Prints the last few rows of a DataFrame

c) Prints a summary statistics of the DataFrame

d) Prints the shape of the DataFrame


Question 8:

Which Pandas method is used to check for missing values in a DataFrame?

a) find_missing()

b) check_missing()

c) missing_values()

d) isnull()


Question 9:

What is the purpose of the iloc method in Pandas?

a) Selects columns based on their labels

b) Selects rows and columns based on their integer positions

c) Performs element-wise operations on a DataFrame

d) Checks for duplicate values in a DataFrame


Question 10:

How can you drop a column named "Column_A" from a Pandas DataFrame called df?

a) df.remove("Column_A")

b) df.drop("Column_A", axis=1)

c) df.delete("Column_A")

d) df.remove_column("Column_A")


Question 11:

Which Pandas function is used to filter rows based on a condition?

a) filter_rows()

b) select_rows()

c) filter()

d) query()


Question 12:

In Pandas, what is the purpose of the groupby() function?

a) Groups data based on unique values in a column

b) Reverses the order of rows in a DataFrame

c) Computes the mean of each column

d) Reshapes a DataFrame into a pivot table


Question 13:

How can you rename a specific column in a Pandas DataFrame?

a) rename_column()

b) change_column_name()

c) df.rename()

d) df.change_name()


Question 14:

What does the merge() function in Pandas do?

a) Merges two DataFrames based on a specified column or index

b) Adds a new column to a DataFrame

c) Sorts the rows of a DataFrame

d) Reshapes a DataFrame into a long format


Question 15:

Which Pandas function is used to pivot a DataFrame?

a) pivot()

b) reshape()

c) pivot_table()

d) transpose()


Question 16:

What is the purpose of the to_csv() function in Pandas?

a) Converts a DataFrame to a CSV file

b) Converts a CSV file to a DataFrame

c) Checks if a CSV file is valid

d) Counts the number of occurrences of each value in a DataFrame


Question 17:

Which method is used to fill missing values in a Pandas DataFrame with a specific value?

a) fillna()

b) fill_missing()

c) replace()

d) impute()


Question 18:

How can you sort a Pandas DataFrame based on a specific column?

a) df.sort_by("column_name")

b) df.sort("column_name")

c) df.sort_values("column_name")

d) df.order_by("column_name")


Question 19:

What is the purpose of the pivot_table() function in Pandas?

a) Pivots a DataFrame from long to wide format

b) Computes the sum of each column in a DataFrame

c) Transposes a DataFrame

d) Aggregates data based on one or more columns


Question 20:

Which Pandas function is used to calculate summary statistics of a DataFrame?

a) describe()

b) summary()

c) stats()

d) analyze()


Answer : 

Question 1: b) A data manipulation and analysis library in Python
Question 2: a) Series
Question 3: b) import pandas as pd
Question 4: b) Storing two-dimensional labeled data
Question 5: a) read_csv()
Question 6: b) By using the column's label or name
Question 7: a) Prints the first few rows of a DataFrame
Question 8: d) isnull()
Question 9: b) Selects rows and columns based on their integer positions
Question 10: b) df.drop("Column_A", axis=1)
Question 11: d) query()
Question 12: a) Groups data based on unique values in a column
Question 13: c) df.rename()
Question 14: a) Merges two DataFrames based on a specified column or index
Question 15: c) pivot_table() (note that pivot(), option a, also pivots a DataFrame; pivot_table() additionally supports aggregation, which Question 19 covers)
Question 16: a) Converts a DataFrame to a CSV file
Question 17: a) fillna()
Question 18: c) df.sort_values("column_name")
Question 19: d) Aggregates data based on one or more columns
Question 20: a) describe()
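
For reference, here is a small pandas sketch that exercises most of the functions named in the answers above; the DataFrame contents and column names are made up for illustration.

# Hedged sketch of the pandas calls covered by the quiz.
# The DataFrame contents and column names below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "department": ["HR", "HR", "IT", "IT"],
    "salary": [50000, 52000, None, 61000],
    "Column_A": [1, 2, 3, 4],
})

print(df.head())                                  # first rows (Q7)
print(df["salary"])                               # column access by label (Q6)
print(df.isnull().sum())                          # missing-value check (Q8)
print(df.iloc[0, 1])                              # position-based selection (Q9)

df = df.drop("Column_A", axis=1)                  # drop a column (Q10)
print(df.query("salary > 51000"))                 # filter rows by condition (Q11)
print(df.groupby("department")["salary"].mean())  # group and aggregate (Q12)

df = df.rename(columns={"salary": "annual_salary"})      # rename a column (Q13)
df["annual_salary"] = df["annual_salary"].fillna(0)      # fill missing values (Q17)
print(df.sort_values("annual_salary"))                   # sort by a column (Q18)
print(df.describe())                                     # summary statistics (Q20)

# Reading and writing CSV files (Q5, Q16) would look like:
# df = pd.read_csv("employees.csv")
# df.to_csv("employees_clean.csv", index=False)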

How much do you know about NumPy library?

 



Question 1:

What is NumPy?

a) A programming language

b) A machine learning library

c) A numerical computing library in Python

d) A deep learning framework


Question 2:

Which of the following is the primary purpose of NumPy?

a) Web development

b) Data manipulation and analysis

c) Game development

d) Network programming


Question 3:

What does the term "NumPy" stand for?

a) Numerical Python

b) Nonlinear Python

c) Neural Python

d) Numeral Python


Question 4:

Which of the following is the correct way to import NumPy in Python?

a) import np

b) import numpy as np

c) from numpy import *

d) include numpy


Question 5:

What is the core data structure in NumPy for representing arrays?

a) Lists

b) Tuples

c) Arrays

d) Sets


Question 6:

Which NumPy function is used to create an array with a range of values?

a) numpy.linspace()

b) numpy.range()

c) numpy.aranges()

d) numpy.arange()


Question 7:

What is the result of the following NumPy expression: numpy.zeros(5)?

a) An array with five elements, all set to 1

b) An array with five elements, all set to 0

c) An empty array

d) An array with five elements, all set to 5


Question 8:

Which NumPy function is used to find the mean of an array?

a) numpy.mean()

b) numpy.average()

c) numpy.median()

d) numpy.mean_value()


Question 9:

In NumPy, what does the function numpy.random.rand() do?

a) Generates random integers

b) Generates random floats in the half-open interval [0.0, 1.0)

c) Computes the standard deviation of an array

d) Creates an array with a specified shape and all elements initialized to the same value


Question 10:

What does the term "broadcasting" mean in the context of NumPy?

a) Sending data over a network

b) Extending the dimensions of an array

c) Performing element-wise operations on arrays of different shapes and sizes

d) Converting data types in an array


Question 11:

Which NumPy function is used to perform element-wise multiplication of two arrays?

a) numpy.multiply()

b) numpy.dot()

c) numpy.cross()

d) numpy.product()


Question 12:

How can you reshape a NumPy array with dimensions (4, 5) into a new array with dimensions (2, 10)?

a) array.reshape((2, 10))

b) numpy.resize(array, (2, 10))

c) numpy.reshape(array, (2, 10))

d) array.resize((2, 10))


Question 13:

What is the purpose of the NumPy function numpy.linalg.inv()?

a) Computes the determinant of a matrix

b) Finds the eigenvalues of a matrix

c) Computes the inverse of a matrix

d) Calculates the singular value decomposition of a matrix


Question 14:

Which NumPy function is used to concatenate two or more arrays along a specified axis?

a) numpy.append()

b) numpy.concatenate()

c) numpy.concat()

d) numpy.merge()


Question 15:

How can you find the index of the maximum value in a NumPy array?

a) numpy.max_index()

b) numpy.argmax()

c) numpy.index_max()

d) numpy.maximum_index()


Question 16:

What is the purpose of the NumPy function numpy.fft.fft()?

a) Finds the fast file transfer of an array

b) Computes the fast Fourier transform of an array

c) Calculates the flip-flop transform of an array

d) Performs the fuzzy frequency transformation of an array


Question 17:

Which NumPy function is used to calculate the dot product of two arrays?

a) numpy.dot()

b) numpy.multiply()

c) numpy.cross()

d) numpy.product()


Question 18:

In NumPy, what does the function numpy.where() do?

a) Filters elements of an array based on a condition

b) Finds the index of a specified element in an array

c) Transposes the elements of an array

d) Sorts the elements of an array


Question 19:

What is the purpose of the NumPy function numpy.save()?

a) Saves an array to a text file

b) Saves an array to a binary file in NumPy's .npy format

c) Saves an array to a CSV file

d) Saves an array to an Excel file


Question 20:

How can you find the unique elements in a NumPy array?

a) numpy.unique()

b) numpy.distinct()

c) numpy.uniq()

d) numpy.elements()


Answer:

Question 1: c) A numerical computing library in Python

Question 2: b) Data manipulation and analysis

Question 3: a) Numerical Python

Question 4: b) import numpy as np

Question 5: c) Arrays

Question 6: d) numpy.arange()

Question 7: b) An array with five elements, all set to 0

Question 8: a) numpy.mean()

Question 9: b) Generates random floats in the half-open interval [0.0, 1.0)

Question 10: c) Performing element-wise operations on arrays of different shapes and sizes

Question 11: a) numpy.multiply()

Question 12: a) array.reshape((2, 10))

Question 13: c) Computes the inverse of a matrix

Question 14: b) numpy.concatenate()

Question 15: b) numpy.argmax()

Question 16: b) Computes the fast Fourier transform of an array

Question 17: a) numpy.dot()

Question 18: a) Filters elements of an array based on a condition

Question 19: b) Saves an array to a binary file in NumPy's .npy format

Question 20: a) numpy.unique()
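
Here is a compact NumPy sketch covering most of the functions the answers above refer to; the array values are arbitrary examples.

# Hedged sketch of the NumPy calls covered by the quiz; values are arbitrary.
import numpy as np

a = np.arange(1, 6)            # array([1, 2, 3, 4, 5]) (Q6)
z = np.zeros(5)                # five zeros (Q7)
print(a.mean(), np.mean(a))    # both give 3.0 (Q8)

r = np.random.rand(3)          # floats in the half-open interval [0.0, 1.0) (Q9)
print(r)

print(a * 10)                  # broadcasting a scalar across the array (Q10)
print(np.multiply(a, a))       # element-wise multiplication (Q11)

m = np.arange(20).reshape(4, 5)
print(m.reshape(2, 10).shape)  # reshaping (4, 5) into (2, 10) (Q12)

inv = np.linalg.inv(np.array([[1.0, 2.0], [3.0, 4.0]]))  # matrix inverse (Q13)
print(inv)

print(np.concatenate([a, z]))  # join arrays along an axis (Q14)
print(np.argmax(a))            # index of the maximum value (Q15)
print(np.dot(a, a))            # dot product (Q17)
print(np.where(a > 2, a, 0))   # condition-based selection (Q18)
print(np.unique([1, 2, 2, 3])) # unique elements (Q20)

# np.save("data.npy", a) would write the array in NumPy's binary .npy format (Q19).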





Managing Machine Learning Projects

 


Build your subject-matter expertise

This course is part of the AI Product Management Specialization

When you enroll in this course, you'll also be enrolled in this Specialization.

Learn new concepts from industry experts

Gain a foundational understanding of a subject or tool

Develop job-relevant skills with hands-on projects

Earn a shareable career certificate

Join Free: Managing Machine Learning Projects

There are 5 modules in this course

This second course of the AI Product Management Specialization by Duke University's Pratt School of Engineering focuses on the practical aspects of managing machine learning projects. The course walks through the key steps of an ML project, from identifying good opportunities for ML through data collection, model building, deployment, and monitoring and maintenance of production systems. Participants will learn about the data science process and how to apply it to organize ML efforts, as well as the key considerations and decisions in designing ML systems.

At the conclusion of this course, you should be able to:

1) Identify opportunities to apply ML to solve problems for users
2) Apply the data science process to organize ML projects
3) Evaluate the key technology decisions to make in ML system design
4) Lead ML projects from ideation through production using best practices

Python, Bash and SQL Essentials for Data Engineering Specialization

 


What you'll learn

Develop data engineering solutions with a minimal and essential subset of the Python language and the Linux environment

Design scripts to connect and query a SQL database using Python

Use a scraping library in Python to read, identify and extract data from websites 

Join Free: Python, Bash and SQL Essentials for Data Engineering Specialization

Specialization - 4 course series

If you are interested in developing the skills needed to be a data engineer, the Python, Bash and SQL Essentials for Data Engineering Specialization is a great place to start. We live in a world that is driven by big data - from what we search online to the route we take to our favorite restaurant, and everything in between. Businesses and organizations use this data to make decisions that impact the ways in which we navigate our lives. How do engineers collect this data? How can this data be organized so that it can be appropriately analyzed? A data engineer is specialized in this initial step of accessing, cleaning and managing big data.

Data engineers today need a solid foundation in a few essential areas: Python, Bash and SQL. In Python, Bash and SQL Essentials for Data Engineering, we provide a nuts and bolts overview of these fundamental skills needed for entering the world of data engineering. Led by three professional data engineers, this Specialization will provide quick and accessible ways to learn data engineering strategies, give you a chance to practice what you’ve learned in integrated lab exercises, and then immediately apply these techniques in your professional or academic life.

Applied Learning Project

Each course includes integrated lab exercises using Visual Studio Code or Jupyter notebooks that give you an opportunity to practice the Python, Bash and SQL skills with real-world applications covered in each course. For each data engineering solution that you explore, you are also encouraged to create a demo video and GitHub repository of code that can be showcased in your digital portfolio for employers. By the end of this Specialization, you will have the foundational skills necessary to begin tackling more complex data engineering solutions.
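
As a tiny illustration of one of the learning goals listed above, connecting to and querying a SQL database from Python, here is a sketch using Python's built-in sqlite3 module; the table and data are invented for the example.

# Hedged sketch: connect to a SQL database from Python and run a query.
# Uses the standard-library sqlite3 module with an in-memory database
# and made-up example data, so it runs with no external setup.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("North", 120.0), ("South", 80.5), ("North", 45.0)],
)
conn.commit()

# Parameterised query: total sales per region above a threshold.
cur.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region HAVING SUM(amount) > ?",
    (100,),
)
for region, total in cur.fetchall():
    print(region, total)

conn.close()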

Spark, Hadoop, and Snowflake for Data Engineering

 


What you'll learn

Create scalable data pipelines (Hadoop, Spark, Snowflake, Databricks) for efficient data handling.

Optimize data engineering with clustering and scaling to boost performance and resource use.

Build ML solutions (PySpark, MLFlow) on Databricks for seamless model development and deployment.

Implement DataOps and DevOps practices for continuous integration and deployment (CI/CD) of data-driven applications, including automating processes.

Join Free: Spark, Hadoop, and Snowflake for Data Engineering

There are 4 modules in this course

Gain the skills for building efficient and scalable data pipelines. Explore essential data engineering platforms (Hadoop, Spark, and Snowflake) and learn how to optimize and manage them. Delve into Databricks, a powerful platform for executing data analytics and machine learning tasks, while honing your Python data science skills with PySpark. Finally, discover the key concepts of MLflow, an open-source platform for managing the end-to-end machine learning lifecycle, and learn how to integrate it with Databricks.

This course is designed for learners who want to pursue or advance their career in data science or data engineering, or for software developers or engineers who want to grow their data management skill set. In addition to the technologies you will learn, you will also gain methodologies to help you hone your project management and workflow skills for data engineering, including applying Kaizen, DevOps, and Data Ops methodologies and best practices.

With quizzes to test your knowledge throughout, this comprehensive course will help guide your learning journey to become a proficient data engineer, ready to tackle the challenges of today's data-driven world.
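
To make the PySpark portion described above a little more concrete, here is a minimal sketch of creating a DataFrame and running a simple aggregation; it assumes the pyspark package is installed, and the column names and values are invented.

# Hedged PySpark sketch: assumes the pyspark package is installed.
# Data and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("2024-01-01", "books", 12.5),
     ("2024-01-01", "games", 30.0),
     ("2024-01-02", "books", 7.25)],
    ["order_date", "category", "amount"],
)

# A typical pipeline step: aggregate revenue per category.
revenue = (orders
           .groupBy("category")
           .agg(F.sum("amount").alias("total_amount"))
           .orderBy(F.desc("total_amount")))

revenue.show()
spark.stop()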

MLOps | Machine Learning Operations Specialization

 


What you'll learn

Master Python fundamentals, MLOps principles, and data management to build and deploy ML models in production environments.

Utilize Amazon Sagemaker / AWS, Azure, MLflow, and Hugging Face for end-to-end ML solutions, pipeline creation, and API development.

Fine-tune and deploy Large Language Models (LLMs) and containerized models using the ONNX format with Hugging Face.

Design a full MLOps pipeline with MLflow, managing projects, models, and tracking system features.

Join Free: MLOps | Machine Learning Operations Specialization

Specialization - 4 course series

This comprehensive course series is perfect for individuals with programming knowledge such as software developers, data scientists, and researchers. You'll acquire critical MLOps skills, including the use of Python and Rust, utilizing GitHub Copilot to enhance productivity, and leveraging platforms like Amazon SageMaker, Azure ML, and MLflow. You'll also learn how to fine-tune Large Language Models (LLMs) using Hugging Face and understand the deployment of sustainable and efficient binary embedded models in the ONNX format, setting you up for success in the ever-evolving field of MLOps.

Through this series, you will begin to learn skills for various career paths:

1. Data Science - Analyze and interpret complex data sets, develop ML models, implement data management, and drive data-driven decision making.

2. Machine Learning Engineering - Design, build, and deploy ML models and systems to solve real-world problems.

3. Cloud ML Solutions Architect - Leverage cloud platforms like AWS and Azure to architect and manage ML solutions in a scalable, cost-effective manner.

4. Artificial Intelligence (AI) Product Management - Bridge the gap between business, engineering, and data science teams to deliver impactful AI/ML products.

Applied Learning Project

Explore and practice your MLOps skills with hands-on practice exercises and Github repositories.

1. Building a Python script to automate data preprocessing and feature extraction for machine learning models.

2. Developing a real-world ML/AI solution using AI pair programming and GitHub Copilot, showcasing your ability to collaborate with AI.

3. Creating web applications and command-line tools for ML model interaction using Gradio, Hugging Face, and the Click framework.

4. Implementing GPU-accelerated ML tasks using Rust for improved performance and efficiency.

5. Training, optimizing, and deploying ML models on Amazon SageMaker and Azure ML for cloud-based MLOps.

6. Designing a full MLOps pipeline with MLflow, managing projects, models, and tracking system features (a minimal tracking sketch follows this list).

7. Fine-tuning and deploying Large Language Models (LLMs) and containerized models using the ONNX format with Hugging Face. Creating interactive demos to effectively showcase your work and advancements.
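
As a taste of item 6, here is a minimal MLflow tracking sketch; it assumes the mlflow package is installed, and the experiment, parameter, and metric names are invented.

# Hedged MLflow tracking sketch; assumes `pip install mlflow`.
# Experiment, parameter, and metric names are made up for illustration.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    # Log the configuration of this (hypothetical) training run...
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 5)

    # ...and the resulting metrics, which MLflow stores per run.
    mlflow.log_metric("accuracy", 0.91)
    mlflow.log_metric("loss", 0.27)

# Runs can then be compared in the MLflow UI (`mlflow ui`) or queried via the API.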

Machine Learning Foundations for Product Managers

 


Build your subject-matter expertise

This course is part of the AI Product Management Specialization

When you enroll in this course, you'll also be enrolled in this Specialization.

Learn new concepts from industry experts

Gain a foundational understanding of a subject or tool

Develop job-relevant skills with hands-on projects

Earn a shareable career certificate

Join Free: Machine Learning Foundations for Product Managers

There are 6 modules in this course

In this first course of the AI Product Management Specialization offered by Duke University's Pratt School of Engineering, you will build a foundational understanding of what machine learning is, how it works and when and why it is applied.  To successfully manage an AI team or product and work collaboratively with data scientists, software engineers, and customers you need to understand the basics of machine learning technology.  This course provides a non-coding introduction to machine learning, with focus on the process of developing models, ML model evaluation and interpretation, and the intuition behind common ML and deep learning algorithms.  The course will conclude with a hands-on project in which you will have a chance to train and optimize a machine learning model on a simple real-world problem.

At the conclusion of this course, you should be able to:
1) Explain how machine learning works and the types of machine learning
2) Describe the challenges of modeling and strategies to overcome them
3) Identify the primary algorithms used for common ML tasks and their use cases
4) Explain deep learning and its strengths and challenges relative to other forms of machine learning
5) Implement best practices in evaluating and interpreting ML models

Tuesday, 16 January 2024

DevOps on AWS: Code, Build, and Test

 


What you'll learn

Understand the DevOps philosophies and its lifecycle

Implement and manage continuous delivery systems and methodologies on AWS

How to use the right tools to measure code quality by identifying workflow steps

Join Free: DevOps on AWS: Code, Build, and Test

There are 2 modules in this course

DevOps is the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.

The DevOps process can be visualized as an infinite loop comprising these steps: plan, code, build, test, release, deploy, operate, and monitor. Throughout each phase, teams collaborate and communicate to maintain alignment, velocity, and quality. This course in the DevOps on AWS specialization focuses on the code, build, and test parts of the workflow. We will discuss topics such as source control, best practices for continuous integration, and how to use the right tools to measure code quality by identifying workflow steps that could be automated.

Hands-on Machine Learning with AWS and NVIDIA

 


There are 4 modules in this course

Machine learning (ML) projects can be complex, tedious, and time consuming. AWS and NVIDIA solve this challenge with fast, effective, and easy-to-use capabilities for your ML project.

Join Free: Hands-on Machine Learning with AWS and NVIDIA

This course is designed for ML practitioners, including data scientists and developers, who have a working knowledge of machine learning workflows. In this course, you will gain hands-on experience on building, training, and deploying scalable machine learning models with Amazon SageMaker and Amazon EC2 instances powered by NVIDIA GPUs. Amazon SageMaker helps data scientists and developers prepare, build, train, and deploy high-quality ML models quickly by bringing together a broad set of capabilities purpose-built for ML. Amazon EC2 instances powered by NVIDIA GPUs along with NVIDIA software offer high performance GPU-optimized instances in the cloud for efficient model training and cost effective model inference hosting.

In this course, you will first get an overview of Amazon SageMaker and NVIDIA GPUs. Then, you will get hands-on, by running a GPU powered Amazon SageMaker notebook instance. You will then learn how to prepare a dataset for model training, build a model, execute model training, and deploy and optimize the ML model. You will also learn, hands-on, how to apply this workflow for computer vision (CV) and natural language processing (NLP) use cases. After completing this course, you will be able to build, train, deploy, and optimize ML workflows with GPU acceleration in Amazon SageMaker and understand the key Amazon SageMaker services applicable to computer vision and NLP ML tasks.

Introduction to Designing Data Lakes on AWS

 


What you'll learn

Where to start with a Data Lake?

How to build a secure and scalable Data Lake?

What are the common components of a Data Lake?

Why do you need a Data Lake and what is its value?

Join Free: Introduction to Designing Data Lakes on AWS

There are 4 modules in this course

In this class, Introduction to Designing Data Lakes on AWS, we will help you understand how to create and operate a data lake in a secure and scalable way, without any prior knowledge of data science! Starting with WHY you may want a data lake, we will look at the data lake value proposition, characteristics, and components.

Designing a data lake is challenging because of the scale and growth of data. Developers need to understand best practices to avoid common mistakes that could be hard to rectify. In this course we will cover the foundations of what a data lake is, how to ingest and organize data into the data lake, and dive into the data processing that can be done to optimize performance and costs when consuming the data at scale. This course is for professionals (architects, system administrators, and DevOps engineers) who need to design and build an architecture for secure and scalable data lake components. Students will learn about the use cases for a data lake and contrast them with a traditional infrastructure of servers and storage.

Getting Started with Data Analytics on AWS

 


What you'll learn

Explain different types of data analyses – descriptive, diagnostic, predictive, prescriptive

Understand how to perform descriptive data analytics in the cloud with typical data sets

How to build simple visualizations in AWS QuickSight to do descriptive analytics (using S3, Cloudtrail, Athena)

Join Free: Getting Started with Data Analytics on AWS

There is 1 module in this course

Learn how to go from raw data to meaningful insights using AWS with this one-week course. Throughout the course, you’ll learn about the fundamentals of Data Analytics from AWS experts.

Start off with an overview of different types of data analytics techniques - descriptive, diagnostic, predictive, and prescriptive before diving deeper into the descriptive data analytics. Then, apply your knowledge with a guided project that makes use of a simple, but powerful dataset available by default in every AWS account: the logs from AWS CloudTrail. The CloudTrail service enables governance, compliance, operational auditing, and risk auditing of your AWS account. Through the project you’ll also get an introduction to Amazon Athena and Amazon QuickSight. And, you’ll learn how to build a basic security dashboard as a simple but practical method of applying your newfound data analytics knowledge.

DevOps on AWS Specialization

 


What you'll learn

Implement DevOps culture and practices in the AWS Cloud

Adopt and enforce Continuous Integration and Continuous

Delivery best practices on AWS

Explore deployment strategies for serverless applications

Join Free: DevOps on AWS Specialization

Specialization - 4 course series

DevOps on AWS specialization teaches you how to use the combination of DevOps philosophies, practices and tools to develop, deploy, and maintain applications in the AWS Cloud. Benefits of adopting DevOps include: rapid delivery, reliability, scalability, security and improved collaboration.

The first course introduces you to essential AWS products, services, and common solutions. The course covers the fundamental concepts of compute, database, storage, networking, monitoring and security that learners and professionals will need to know when working with AWS.

The second course in the specialization discusses topics such as source control, best practices for Continuous Integration, and how to use the right tools to measure code quality, by identifying workflow steps that could be automated.

The third course explains how to improve the deployment process with DevOps methodology, and also some tools that might make deployments easier, such as Infrastructure as Code, or IaC, and AWS CodeDeploy.

Finally, the last course teaches how to use Amazon CloudWatch for monitoring, as well as Amazon EventBridge and AWS Config for continuous compliance. It also covers Amazon CloudTrail and a little bit of Machine Learning for Monitoring operations.

Applied Learning Project

AWS provides a set of flexible services designed to enable companies to more rapidly and reliably build and deliver products using AWS and DevOps practices. These services simplify provisioning and managing infrastructure, deploying application code, automating software release processes, and monitoring your application and infrastructure performance. This specialization has a significant hands-on component involving the AWS Free Tier in which you will explore AWS services and concepts using AWS SDKs, AWS APIs, and the AWS Console.

Monday, 15 January 2024

Clean Architectures in Python (Free PDF)

 

Introduction
    What is a software architecture?
    Why is it called “clean”?
    Why “architectures”?
    Why Python?
    Acknowledgments

About the book
    Prerequisites and structure of the book
    Typographic conventions
    Why this book comes for free
    Submitting issues or patches
    About the author
    Changes in the second edition

Chapter 01: A day in the life of a clean system
    The data flow
    Advantages of a layered architecture

Chapter 02: Components of a clean architecture

Chapter 03: A basic example

Chapter 04: Add a web application
    Flask setup
    Test and create an HTTP endpoint
    WSGI

Chapter 05: Error management
    Request and responses
    Basic structure
    Requests and responses in a use case
    Request validation
    Responses and failures
    Error management in a use case
    Integrating external systems

Chapter 06: Integration with a real external system (Postgres)
    Decoupling with interfaces
    A repository based on PostgreSQL
    Label integration tests
    Create SQLAlchemy classes
    Orchestration management
    Database fixtures
    Integration tests

Chapter 07: Integration with a real external system (MongoDB)
    Fixtures
    Docker Compose configuration
    Application configuration
    Integration tests
    The MongoDB repository

Chapter 08: Run a production-ready system
    Build a web stack
    Connect to a production-ready database

Changelog

Colophon

PDF Download: Clean Architectures in Python A practical approach to better software design
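
To hint at what the book's layering looks like in practice, here is a heavily simplified sketch of an entity, a use case, and a repository interface kept independent of any framework or database; the names and structure are illustrative only and are not taken from the book.

# A heavily simplified sketch of clean-architecture layering in Python.
# Names and structure are illustrative only, not the book's actual code.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Room:                      # entity: plain domain object, no framework code
    code: str
    price: int


class RoomRepository(Protocol):  # boundary: the use case depends on this interface
    def list_rooms(self) -> List[Room]: ...


def rooms_cheaper_than(repo: RoomRepository, max_price: int) -> List[Room]:
    """Use case: business rule expressed only in terms of entities and the boundary."""
    return [room for room in repo.list_rooms() if room.price <= max_price]


class InMemoryRoomRepository:    # outermost layer: one concrete implementation
    def __init__(self, rooms: List[Room]) -> None:
        self._rooms = rooms

    def list_rooms(self) -> List[Room]:
        return list(self._rooms)


repo = InMemoryRoomRepository([Room("A1", 50), Room("B2", 120)])
print(rooms_cheaper_than(repo, 100))   # [Room(code='A1', price=50)]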




Understanding and Visualizing Data with Python

 


What you'll learn

Properly identify various data types and understand the different uses for each  

Create data visualizations and numerical summaries with Python

Communicate statistical ideas clearly and concisely to a broad audience

Identify appropriate analytic techniques for probability and non-probability samples

Join Free: Understanding and Visualizing Data with Python

There are 4 modules in this course

In this course, learners will be introduced to the field of statistics, including where data come from, study design, data management, and exploring and visualizing data. Learners will identify different types of data, and learn how to visualize, analyze, and interpret summaries for both univariate and multivariate data. Learners will also be introduced to the differences between probability and non-probability sampling from larger populations, the idea of how sample estimates vary, and how inferences can be made about larger populations based on probability sampling.

At the end of each week, learners will apply the statistical concepts they’ve learned using Python within the course environment. During these lab-based sessions, learners will discover the different uses of Python as a tool, including the Numpy, Pandas, Statsmodels, Matplotlib, and Seaborn libraries. Tutorial videos are provided to walk learners through the creation of visualizations and data management, all within Python. This course utilizes the Jupyter Notebook environment within Coursera.
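
For a sense of the lab work described above, here is a small sketch using pandas, Matplotlib, and Seaborn that produces a numerical summary and a simple visualization; the dataset and column names are invented, not the course's own materials.

# Hedged sketch of a numerical summary plus a visualization,
# in the spirit of the course's lab sessions; the data are invented.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 80, size=200),
    "group": rng.choice(["A", "B"], size=200),
})

# Univariate numerical summary.
print(df["age"].describe())

# Simple multivariate view: age distribution by group.
sns.boxplot(data=df, x="group", y="age")
plt.title("Age distribution by group")
plt.show()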

Introduction to Structured Query Language (SQL)

 


What you'll learn

Learn about the basic syntax of the SQL language, as well as database design with multiple tables, foreign keys, and the JOIN operation.

Learn to model many-to-many relationships like those needed to represent users, roles, and courses.

Join Free: Introduction to Structured Query Language (SQL)

There are 4 modules in this course

In this course, you'll walk through the steps of installing a text editor, installing MAMP or XAMPP (or an equivalent), and creating a MySQL database. You'll learn about single-table queries and the basic syntax of the SQL language, as well as database design with multiple tables, foreign keys, and the JOIN operation. Lastly, you'll learn to model many-to-many relationships like those needed to represent users, roles, and courses.
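
Since the course description above mentions JOINs and many-to-many relationships, here is a sketch using Python's built-in sqlite3 module (rather than the MySQL setup the course itself uses) with invented tables for users, roles, and a junction table.

# Hedged sketch of a many-to-many model (users <-> roles) and a JOIN,
# using sqlite3 instead of the MySQL/MAMP setup the course itself uses.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE roles (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE user_roles (              -- junction table for the many-to-many link
        user_id INTEGER REFERENCES users(id),
        role_id INTEGER REFERENCES roles(id),
        PRIMARY KEY (user_id, role_id)
    );
    INSERT INTO users VALUES (1, 'Ana'), (2, 'Ben');
    INSERT INTO roles VALUES (1, 'Instructor'), (2, 'Student');
    INSERT INTO user_roles VALUES (1, 1), (2, 2), (1, 2);
""")

cur.execute("""
    SELECT users.name, roles.title
    FROM users
    JOIN user_roles ON user_roles.user_id = users.id
    JOIN roles ON roles.id = user_roles.role_id
    ORDER BY users.name, roles.title
""")
print(cur.fetchall())   # [('Ana', 'Instructor'), ('Ana', 'Student'), ('Ben', 'Student')]
conn.close()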

Survey Data Collection and Analytics Specialization

 


Advance your subject-matter expertise

Learn in-demand skills from university and industry experts

Master a subject or tool with hands-on projects

Develop a deep understanding of key concepts

Earn a career certificate from University of Michigan

Join Free: Survey Data Collection and Analytics Specialization

Specialization - 7 course series

This specialization covers the fundamentals of surveys as used in market research, evaluation research, social science and political research, official government statistics, and many other topic domains. In six courses, you will learn the basics of questionnaire design, data collection methods, sampling design, dealing with missing values, making estimates, combining data from different sources, and the analysis of survey data. In the final Capstone Project, you’ll apply the skills learned throughout the specialization by analyzing and comparing multiple data sources.
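
To make one of those ideas concrete, "making estimates" from a probability sample usually means weighting each respondent by the inverse of their selection probability. A minimal sketch with made-up numbers (not course material) looks like this:

# Hypothetical probability sample: each respondent carries a design weight
# equal to the inverse of their selection probability (illustrative values).
responses = [52.0, 61.0, 48.0, 55.0]       # reported values
weights   = [100.0, 250.0, 150.0, 500.0]   # design weights

# Design-weighted estimate of the population mean
weighted_mean = sum(w * y for w, y in zip(weights, responses)) / sum(weights)
print(round(weighted_mean, 2))   # 55.15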


The faculty for this specialization comes from the Michigan Program in Survey Methodology and the Joint Program in Survey Methodology, a collaboration between the University of Maryland, the University of Michigan, and the data collection firm Westat, founded by the National Science Foundation and the Interagency Consortium of Statistical Policy in the U.S. to educate the next generation of survey researchers, survey statisticians, and survey methodologists. In addition to this specialization, we offer short courses, a summer school, certificates, and master's degrees, as well as PhD programs.



Statistics with Python Specialization

 


What you'll learn

Create and interpret data visualizations using the Python programming language and associated packages & libraries

Apply and interpret inferential procedures when analyzing real data

Apply statistical modeling techniques to data (i.e., linear and logistic regression, linear models, multilevel models, and Bayesian inference techniques)

Understand the importance of connecting research questions to data analysis methods

Join Free: Statistics with Python Specialization

Specialization - 3 course series

This specialization is designed to teach learners beginning and intermediate concepts of statistical analysis using the Python programming language. Learners will learn where data come from, what types of data can be collected, study design, data management, and how to effectively carry out data exploration and visualization. They will be able to utilize data for estimation and for assessing theories, construct confidence intervals, interpret inferential results, and apply more advanced statistical modeling procedures. Finally, they will learn the importance of connecting research questions to the statistical and data analysis methods taught to them, and will be able to make those connections themselves.
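
As a flavour of the kind of code involved, here is a minimal sketch (using simulated data, not course material) of a 95% confidence interval and a logistic regression fitted with statsmodels:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: a binary outcome whose probability depends on one predictor
x = rng.normal(size=200)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)

# 95% confidence interval for the mean of x
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))
print("95% CI for the mean:", (mean - 1.96 * se, mean + 1.96 * se))

# Logistic regression of y on x (with an intercept)
model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(model.params)       # estimated coefficients
print(model.conf_int())   # confidence intervals for the coefficients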

Applied Learning Project

The courses in this specialization feature a variety of assignments that will test the learner’s knowledge and ability to apply content through concept checks, written analyses, and Python programming assessments. These assignments are conducted through quizzes, submission of written assignments, and the Jupyter Notebook environment.

User Experience Research and Design Specialization

 


What you'll learn

 Understand the basics of UX design and UX research

 Use appropriate UX research approaches to inform design decisions

 Design a complete product, taking it from an initial concept to an interactive prototype

Join Free: User Experience Research and Design Specialization

Specialization - 6 course series

Integrate UX Research and UX Design to create great products through understanding user needs, rapidly generating prototypes, and evaluating design concepts. Learners will gain hands-on experience with taking a product from initial concept, through user research, ideation and refinement, formal analysis, prototyping, and user testing, applying perspectives and methods to ensure a great user experience at every step.

Applied Learning Project

This Coursera specialization in UX Research and UX Design concludes with a capstone project, in which learners will incorporate UX Research and Design principles to design a complete product, taking it from an initial concept to an interactive prototype.

Sunday, 14 January 2024

Happy Makar Sankranti using Python

 


import turtle


# Set up the turtle
t = turtle.Turtle()
t.speed(2)
t.penup()

# Define scaling factor
scale_factor = 3.5

# Draw the kite body as four colored triangles
t.fillcolor("orange")
t.begin_fill()
t.goto(0, 100 * scale_factor)
t.goto(-100 * scale_factor, 0)
t.goto(0, 0)
t.end_fill()

t.fillcolor("pink")
t.begin_fill()
t.goto(100 * scale_factor, 0)
t.goto(0, 100 * scale_factor)
t.goto(0, 0)
t.end_fill()

t.fillcolor("cyan")
t.begin_fill()
t.goto(-100 * scale_factor, 0)
t.goto(0, -100 * scale_factor)
t.goto(0, 0)
t.end_fill()

t.fillcolor("green")
t.begin_fill()
t.goto(100 * scale_factor, 0)
t.goto(0, -100 * scale_factor)
t.end_fill()

# Draw the kite's tail
t.fillcolor("yellow")
t.begin_fill()
t.goto(50 * scale_factor, -150 * scale_factor)
t.goto(-50 * scale_factor, -150 * scale_factor)
t.goto(0, -100 * scale_factor)
t.end_fill()

# Write text
t.penup()
t.goto(200, 100 * scale_factor)
t.write("Happy Makar Sankranti", align="center", font=("Arial", 16, "bold"))

# Close the turtle graphics window when clicked
turtle.exitonclick()

#clcoding.com

How much do you know about the Intricacies of Classes and Objects in Python?


 


a. A global function can call a class method as well as an instance method.

Answer

True

b. In Python a function, class, method and module are treated as objects.

Answer

True

c. Given an object, it is possible to determine its type and address.

Answer

True

d. It is possible to delete attributes of an object during execution of the program.

Answer

True

e. Arithmetic operators, comparison operators and compound assignment operators can be overloaded in Python.

Answer

True

f. The + operator has been overloaded in the classes str, list and int.

Answer

True (str, list and int each define __add__, so + performs concatenation for strings and lists and addition for integers)
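
The short, hypothetical snippet below (all names are invented for illustration) demonstrates several of these points: functions and classes are objects, type() and id() inspect an object, attributes can be deleted at run time, and + is overloaded per class.

class Greeter:
    def __init__(self, name):
        self.name = name

    @classmethod
    def species(cls):
        return "human"

    def hello(self):
        return f"Hello, {self.name}!"


def demo():
    # a. A global function can call a class method as well as an instance method
    g = Greeter("Ada")
    print(Greeter.species(), g.hello())

    # b./c. Functions, classes, methods and modules are objects; type() gives the
    # type and id() gives the identity (the memory address in CPython)
    print(type(Greeter), type(demo), id(g))

    # d. Attributes can be deleted while the program is running
    del g.name
    print(hasattr(g, "name"))   # False

    # e./f. + is overloaded per class: addition for int, concatenation for str and list
    print(2 + 3, "ab" + "cd", [1, 2] + [3])


demo()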

Saturday, 13 January 2024

A simple text-based guessing game in Python

 


import random


def guess_the_number():
    # Generate a random number between 1 and 100
    secret_number = random.randint(1, 100)

    print("Welcome to the Guess the Number game!")
    print("I have selected a number between 1 and 100. Can you guess it?")

    attempts = 0

    while True:
        try:
            # Get player's guess
            guess = int(input("Enter your guess: "))
            attempts += 1

            # Check if the guess is correct
            if guess == secret_number:
                print(f"Congratulations! You guessed the number in {attempts} attempts.")
                break
            elif guess < secret_number:
                print("Too low! Try again.")
            else:
                print("Too high! Try again.")

        except ValueError:
            print("Please enter a valid number.")

if __name__ == "__main__":
    guess_the_number()


