ComputerScience/Machine Learning (48)

Deep Learning - 2.3 Concise Implementation of Linear Regression
In this section, we implement the linear regression model that we built before, this time using the high-level APIs of deep learning frameworks. Generating the Dataset, Reading the Dataset (minibatches), Defining the Model, Initializing Model Parameters, Defining the Loss Function, Defining the Optimization Algorithm, Training. 1. Generating the Dataset Generate the same dataset we made before. 2. Readin..

Deep Learning - 2.2 Linear Regression Implementation from Scratch
Let's implement linear regression using only tensors, including the data pipeline, model, loss, and minibatch stochastic gradient descent optimizer. Generating the Dataset, Reading the Dataset (minibatches), Initializing Model Parameters, Defining the Model, Defining the Loss Function, Defining the Optimization Algorithm, Training. 1. Generating the Dataset Our synthetic dataset will be a matrix Our synthetic labels..

Deep Learning - 2.1 Linear Regression
Regression problems pop up whenever we want to predict a numerical value. Linear Model, Loss Function, Minibatch Stochastic Gradient Descent, Vectorization for Speed, The Normal Distribution and Squared Loss, Deep Networks. 1. Linear Model Let's say we want to predict y (the label, or target) from a training set x. x(i) represents the ith example (sample) of the training dataset. Those independent variables x1(i)..

Deep Learning - 1.6 Probability
Probability is a flexible language for reasoning about our level of certainty. Sampling, Axioms, Random Variables, Joint Probability, Conditional Probability, Independence, Marginalization, Bayes' Theorem, Practice, Expectation and Variance. 1. Sampling Corresponding to the probabilities, we can randomly draw multiple samples: conduct 500 groups of experiments where each group draws 10 samples (sampling). 2. Axioms W..

Deep Learning - 1.5 Automatic Differentiation
Deep learning frameworks can automate the calculation of derivatives.
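The post's framework-specific autodiff code is truncated here, but the idea can be illustrated without a framework: for a toy function such as f(x) = 2·xᵀx (matching the f = 2(x² + y²) example), the gradient a framework's backward pass would attach to x is 4x, and we can verify that analytic result with finite differences. A minimal NumPy sketch; the helper names `analytic_grad` and `numeric_grad` are illustrative, not from the post:

```python
import numpy as np

def f(x):
    # Toy function matching the post's example: f(x) = 2 * (x1^2 + x2^2 + ...) = 2 * x.dot(x)
    return 2.0 * np.dot(x, x)

def analytic_grad(x):
    # By hand: d/dx [2 * x.dot(x)] = 4x -- the value a framework's
    # backward pass would store in x.grad
    return 4.0 * x

def numeric_grad(func, x, eps=1e-6):
    # Central finite differences: perturb one coordinate at a time
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (func(x + e) - func(x - e)) / (2 * eps)
    return g

x = np.array([1.0, 2.0])
# The analytic gradient [4, 8] agrees with the numerical estimate
assert np.allclose(analytic_grad(x), numeric_grad(f, x), atol=1e-4)
```

Finite-difference checking like this is also a common way to debug a hand-written backward pass before trusting it in training.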
Simple Example, Backward for Non-Scalar Variables, Detaching Computation, Gradient of Python Control Flow. 1. Simple Example During backpropagation, the deep learning framework automatically calculates gradients. First, attach gradients to the variables with respect to which we want partial derivatives. In this example f = 2(x^2 + y^2..

Deep Learning - 1.4 Calculus
Derivatives and Differentiability, Partial Derivatives, Gradients, Chain Rule. 1. Derivatives and Differentiability The prime of f(x) is its derivative. If the derivative exists, f(x) is differentiable. The derivative f'(x) can be interpreted as the instantaneous rate of change of f(x) with respect to x. All of the following expressions are equal. 2. Partial Derivatives To calculate the partial derivative with respect to xi, we can simply tre..

Deep Learning - 1.3 Linear Algebra
Let's briefly review a subset of basic linear algebra. Scalars, Vectors, Matrices, Tensors, Reduction (sum, mean), Non-Reduction Sum, Cumulative Sum, Dot Products, Norms. 1. Scalars Scalar variables are denoted by lowercase letters. A scalar is represented by a tensor with just one element. 2. Vectors A vector is just an array of numbers. We usually denote vectors as bold-faced, lowercase letters. We work..

Deep Learning - 1.2 Data Preprocessing
In this chapter we briefly walk through the steps for preprocessing raw data with pandas and converting it into the tensor format. Reading the Dataset, Handling Missing Data, Conversion to the Tensor Format (deletion, imputation, conversion to tensor). 1. Reading the Dataset Before practicing reading a .csv file, make an artificial dataset. The type of this data is a pandas DataFrame, not a tensor. 2. Handling Missing Data NaN is mi..
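The preprocessing steps listed for 1.2 (read, handle missing data, convert to tensor) can be sketched with pandas; the column names and values below are made up for illustration and are not the post's actual artificial dataset:

```python
import numpy as np
import pandas as pd

# Illustrative DataFrame standing in for the post's artificial .csv data;
# NaN marks the missing entries the post discusses.
df = pd.DataFrame({
    "NumRooms": [np.nan, 2.0, 4.0, np.nan],
    "Price": [127500, 106000, 178100, 140000],
})

# Imputation: replace NaN with the column mean (here, (2 + 4) / 2 = 3.0)
df["NumRooms"] = df["NumRooms"].fillna(df["NumRooms"].mean())

# "Conversion to the tensor format" -- shown here as a NumPy array,
# which deep learning frameworks accept directly (e.g. torch.tensor(arr))
arr = df.to_numpy(dtype=float)  # shape (4, 2)
```

Deletion (dropping rows or columns with `df.dropna()`) is the alternative to imputation when missing values are too frequent to fill in meaningfully.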