
Golang Program To Create Multiple Begin And End Blocks

In this article, we will learn to write Go language programs that create multiple BEGIN and END blocks using curly braces, conditional statements, and external functions.

A block is created using curly braces. The scope of a variable declared inside a block is limited to that block; it is not visible outside it.
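As a minimal sketch of this scoping rule (a generic illustration, separate from the article's own examples below), the following program declares a variable inside an inner block; referencing it after the closing brace would be a compile-time error, which is why that line is left commented out.

package main

import "fmt"

func main() {
   {
      a := 10 // a exists only inside these braces
      fmt.Println("inside the block:", a)
   }
   // fmt.Println(a) // compile error: undefined: a
}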

Algorithm

Step 1 − Create a package main and declare the fmt (format) package in the program, where main produces executable code and fmt helps in formatting input and output.

Step 2 − Create a main function

Step 3 − In the main, create a first block by initializing a variable

Step 4 − Print the value of a on the console for the first block

Step 5 − Create a second block by initializing a variable

Step 6 − Print the value of a variable on the console for the second block

Step 7 − Then, create a third block and initialize a variable in the block

Step 8 − Print the value of a inside the third block

Step 9 − The print statements use the Println function from the fmt package, where ln means new line

Example 1

In this example, we will create a main function and, inside it, create multiple blocks using curly braces. In each block we print the desired output on the console.

package main

import "fmt"

func main() {
   {
      a := 1
      fmt.Println("Value of a inside first block is:", a)
   }
   {
      a := 2
      fmt.Println("Value of a inside second block is:", a)
   }
   {
      a := 3
      fmt.Println("Value of a inside third block is:", a)
   }
}

Output

Value of a inside first block is: 1
Value of a inside second block is: 2
Value of a inside third block is: 3

Example 2

In this example, a main function will be created and, in that function, we create three blocks, each using an if statement whose condition is simply set to true.

package main

import "fmt"

func main() {
   if true {
      a := 1
      fmt.Println("Value of a inside first block is:", a)
   }
   if true {
      a := 2
      fmt.Println("Value of a inside second block is:", a)
   }
   if true {
      a := 3
      fmt.Println("Value of a inside third block is:", a)
   }
}

Output

Value of a inside first block is: 1
Value of a inside second block is: 2
Value of a inside third block is: 3

Example 3

In this Example, we will write a Go language program to create multiple BEGIN and END blocks using three external functions.

package main

import "fmt"

func main() {
   firstBlock()
   secondBlock()
   thirdBlock()
}

func firstBlock() {
   a := 1
   fmt.Println("Value of a inside first block is:", a)
}

func secondBlock() {
   a := 2
   fmt.Println("Value of a inside second block:", a)
}

func thirdBlock() {
   a := 3
   fmt.Println("Value of a inside third block:", a)
}

Output

Value of a inside first block is: 1
Value of a inside second block: 2
Value of a inside third block: 3

Conclusion

We executed the program of creating multiple BEGIN and END blocks. In the first example we used curly braces to create the blocks, in the second example we used if statements with the condition set to true, and in the third example we used external functions.


Golang Program To Create Directories

Golang has built-in packages like os and io/ioutil for creating a new directory. Here, we will create a directory using two examples: in the first example we will use the os.Mkdir function, and in the second example we will use the ioutil.WriteFile function.

Method 1: Using os.Mkdir Function

In this method, a directory whose name is stored in the directoryname variable is created using the os.Mkdir function. The permission bits for the new directory are the second argument to os.Mkdir, which we set to 0755 (read, write, and execute permissions for the owner; read and execute permissions for others). The program will print "Directory created successfully!" if the directory creation succeeds; otherwise it will print an error message.

Syntax os.Mkdir(name string, perm os.FileMode) error

The os.Mkdir function in Go helps in creating a new directory with the specified name and permission bits (mode).
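For context, and going slightly beyond the article's example (this is only a sketch under my own assumptions about the directory names), os.Mkdir creates a single directory and fails if a parent is missing or if the directory already exists, whereas os.MkdirAll creates any missing parents and succeeds if the path already exists.

package main

import (
   "errors"
   "fmt"
   "os"
)

func main() {
   // os.Mkdir fails if "newdir" already exists or its parent is missing.
   if err := os.Mkdir("newdir", 0755); err != nil {
      if errors.Is(err, os.ErrExist) {
         fmt.Println("newdir already exists, continuing")
      } else {
         fmt.Println(err)
         return
      }
   }

   // os.MkdirAll creates all missing parent directories in the path.
   if err := os.MkdirAll("newdir/sub/child", 0755); err != nil {
      fmt.Println(err)
      return
   }
   fmt.Println("Directories created successfully!")
}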

Algorithm

Step 1 − Create a package main and declare the fmt (format) and os packages in the program, where main produces executable code and fmt helps in formatting input and output.

Step 2 − Create a directoryname variable and assign to it the name of the directory one wants to create, newdir.

Step 3 − Use os.Mkdir function to create a new directory.

Step 4 − If an error occurs while creating the directory, print the error on the console using the fmt.Println() function, where ln means new line, and return.

Step 5 − If the directory is created successfully, print the success message using the same statement as in Step 4.

Example

In this example, we will use the os.Mkdir function to create a new directory.

package main

import (
   "fmt"
   "os"
)

func main() {
   directoryname := "newdir"
   err := os.Mkdir(directoryname, 0755)
   if err != nil {
      fmt.Println(err)
      return
   }
   fmt.Println("Directory created successfully!")
}

Output

Directory created successfully!

Method 2: Using io/ioutil Package

In this method, an empty file with the given name and permission bits is created using the ioutil.WriteFile function. If the function returns a nil error and the file is successfully created, we print a success message; if an error occurs, we print the error message instead.

Syntax ioutil.WriteFile(filename string, data []byte, perm os.FileMode) error

The ioutil.WriteFile function in Go is used to write a byte slice to a file.
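As a side note (my own sketch, not part of the original article): since Go 1.16, ioutil.WriteFile is deprecated in favour of os.WriteFile, and the two approaches above can be combined by first creating a directory and then writing a byte slice to a file inside it. The directory and file names here are illustrative.

package main

import (
   "fmt"
   "os"
   "path/filepath"
)

func main() {
   // create the directory first, then write a file inside it
   if err := os.Mkdir("newdir", 0755); err != nil {
      fmt.Println(err)
      return
   }
   path := filepath.Join("newdir", "hello.txt")
   if err := os.WriteFile(path, []byte("hello from Go\n"), 0644); err != nil {
      fmt.Println(err)
      return
   }
   fmt.Println("File written successfully:", path)
}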

Algorithm

Step 1 − Create a package main and declare the fmt (format) and io/ioutil packages in the program, where main produces executable code and fmt helps in formatting input and output.

Step 2 − Create the function main and in that function create a variable directoryname and assign it to newdir.

Step 3 − Use the ioutil.WriteFile function from the io/ioutil package to create the file at the given path.

Step 4 − If an error comes while creating the directory, print the error on the console and return.

Step 5 − If the directory is created successfully, print the success statement on the console.

Step 6 − The print statements use the fmt.Println() function, where ln means new line.

Example

In this example, we will use the ioutil.WriteFile function from the io/ioutil package.

package main

import (
   "fmt"
   "io/ioutil"
)

func main() {
   directoryname := "newdir"
   err := ioutil.WriteFile(directoryname, []byte(""), 0755)
   if err != nil {
      fmt.Println(err)
      return
   }
   fmt.Println("Directory created successfully!")
}

Output

Directory created successfully!

Conclusion

We executed the program of creating a directory using two methods. In the first method we used the os.Mkdir function and in the second method we used the io/ioutil package.

Knowledge Distillation: Theory And End To End Case Study


In this article, we will first cover the theory of Knowledge Distillation and then work through an end-to-end case study on a business problem: classifying chest x-ray images for pneumonia detection.


What is Knowledge Distillation?

Knowledge Distillation aims to transfer knowledge from a large deep learning model to a small deep learning model. Here, size refers to the number of parameters in the model, which directly relates to the latency of the model.

Knowledge distillation is therefore a method to compress the model while maintaining accuracy. Here the bigger network which gives the knowledge is called a Teacher Network and the smaller network which is receiving the knowledge is called a Student Network.

 (Image Source: Author, Inspired from Reference [6])

Why make the Model Lighter?

In many applications, the model needs to be deployed on systems that have low computational power, such as mobile and edge devices. For example, in the medical field, limited-compute systems (for example, POCUS – Point of Care Ultrasound) are used in remote areas where the models must run in real time. From both a time (latency) and a memory (compute) perspective, it is desirable to have ultra-light and accurate deep learning models.

But ultra-light models (with only a few thousand parameters) may not give us good accuracy. This is where Knowledge Distillation helps by taking help from the teacher network: it makes the model light while maintaining accuracy.

Knowledge Distillation Steps

Below are the steps for Knowledge Distillation:

1) Train the teacher network: a large, high-capacity network is first trained on the dataset until it reaches good accuracy.

2) Define the student network: a much smaller network, with far fewer parameters, is defined for the same task.

3) Train the student network intelligently in coordination with the teacher network: the student network is trained in coordination with the fully trained teacher network. Forward propagation is done on both the teacher and the student networks, while backpropagation is done only on the student network. Two loss functions are defined: the student loss and the distillation loss. These loss functions are explained in the next part of this article.

 

Knowledge Distillation Mathematical Equations:

(Image Source: Author, Inspired from Reference [7])

Loss Functions for teacher and student networks are defined as below:

Teacher Loss LT (between the actual labels and the teacher network's predictions):

LT = H(p, qT)

Total Student Loss LTS:

LTS = α * Student Loss + Distillation Loss

LTS = α * H(p, qS) + H(q̃T, q̃S)

where,

Distillation Loss = H(q̃T, q̃S)

Student Loss = H(p, qS)

Here:

H : loss function (categorical cross-entropy or KL divergence)

p : actual (ground-truth) labels

zT and zS : pre-softmax logits of the teacher and the student

qT = softmax(zT) and qS = softmax(zS) : predictions of the teacher and the student

q̃T = softmax(zT / t) and q̃S = softmax(zS / t) : softened (temperature-scaled) predictions

alpha (α) and temperature (t) are hyperparameters.

Temperature t is used to reduce the magnitude difference among the class likelihood values.

These mathematical equations are taken from reference [3].
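To make the effect of the temperature concrete, here is a small worked example with made-up logits (illustrative numbers, not taken from the case study). For logits z = (4, 1, 0):

$$\mathrm{softmax}(z) \approx (0.94,\ 0.05,\ 0.02), \qquad \mathrm{softmax}(z/t)\big|_{t=4} \approx (0.54,\ 0.26,\ 0.20)$$

Dividing the logits by the temperature before the softmax clearly softens the distribution, so the relative likelihoods of the non-top classes carry more usable information for the student to learn from.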

End to End Case Study

Here we will look at a case study where we will implement the knowledge distillation concept in an image classification problem for pneumonia detection.

About Data:

The dataset contains chest x-ray images. Each image can belong to one of three classes:

1) NORMAL

2) PNEUMONIA_BACTERIA or BACTERIA

3) PNEUMONIA_VIRUS or VIRUS

Let’s get started!!

Importing Required Libraries:

import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import glob
import shutil
import tensorflow as tf
from tensorflow import keras   # keras.Model is used by the Distiller class defined later
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Conv2D, Dropout, MaxPool2D, BatchNormalization, Input, Conv2DTranspose, Concatenate
from tensorflow.keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
from sklearn.model_selection import train_test_split
import random
import h5py
from IPython.display import display
from PIL import Image as im
import datetime

Downloading the data

The data set is huge. I have randomly selected 1,000 images for each class and kept 800 images in the train data, 100 images in the validation data, and 100 images in the test data for each class. I zipped this selected data and uploaded it to my Google Drive.

S. No.  Class     Train  Test  Validation
1.      NORMAL    800    100   100
2.      BACTERIA  800    100   100
3.      VIRUS     800    100   100

Downloading the data from google drive to google colab:

# downloading the data and unzipping it
from google.colab import drive
drive.mount('/content/drive')
!unzip "/content/drive/MyDrive/data_xray.zip" -d "/content/"

Visualizing the images

We will now look at some images from each of the classes.

for i, folder in enumerate(os.listdir(train_path)):
    for j, img in enumerate(os.listdir(train_path + "/" + folder)):
        filename = train_path + "/" + folder + "/" + img
        img = im.open(filename)
        ax = plt.subplot(3, 4, 4 * i + j + 1)
        plt.imshow(img, 'gray')
        ax.set_xlabel(folder + ' ' + str(img.size[0]) + 'x' + str(img.size[1]))
        ax.axes.xaxis.set_ticklabels([])
        ax.axes.yaxis.set_ticklabels([])
        # plt.axis('off')
        img.close()
        break

So above sample images suggest that each x-ray image can be of a different size.

Creating Data Generators

We will use the Keras ImageDataGenerator for image augmentation. Image augmentation is a tool to get multiple transformed copies of an image; these transformations can be cropping, rotating, or flipping, and they help the model generalize. The generator will also ensure that every image comes out at the same size (224×224). Below are the generators for the train and validation data.

def trainGenerator(batch_size, train_path):
    datagen = ImageDataGenerator(rescale=1. / 255,
                                 rotation_range=5,
                                 shear_range=0.02,
                                 zoom_range=0.1,
                                 brightness_range=[0.7, 1.3],
                                 horizontal_flip=True,
                                 vertical_flip=True,
                                 fill_mode='nearest')
    train_gen = datagen.flow_from_directory(train_path,
                                            batch_size=batch_size,
                                            target_size=(224, 224),
                                            shuffle=True,
                                            seed=1,
                                            class_mode="categorical")
    for image, label in train_gen:
        yield (image, label)

def validGenerator(batch_size, valid_path):
    datagen = ImageDataGenerator(rescale=1. / 255)
    valid_gen = datagen.flow_from_directory(valid_path,
                                            batch_size=batch_size,
                                            target_size=(224, 224),
                                            shuffle=True,
                                            seed=1)
    for image, label in valid_gen:
        yield (image, label)

Model 1: Teacher Network

Here we will use the VGG16 model and train it using transfer learning (based on the ImageNet dataset).

We will first define the VGG16 model.

from tensorflow.keras.applications.vgg16 import VGG16

base_model = VGG16(input_shape=(224, 224, 3),  # shape of our images
                   include_top=False,          # drop the ImageNet classifier head so we can add our own
                   weights='imagenet')

Out of the total layers, we will make the first 8 layers untrainable:

len(base_model.layers)

for layer in base_model.layers[:8]:
    layer.trainable = False

x = layers.Flatten()(base_model.output)
# Add a fully connected layer with 512 hidden units and ReLU activation
x = layers.Dense(512, activation='relu')(x)
# x = layers.BatchNormalization()(x)
# Add a dropout rate of 0.5
x = layers.Dropout(0.5)(x)
x = layers.Dense(3)(x)   # linear activation to get pre-softmax logits

model = tf.keras.models.Model(base_model.input, x)
opti = Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
model.summary()

As we can see, there are 27M parameters in the teacher network.

One important point to note here is that the last layer of the model does not have an activation function (i.e., it has the default linear activation). Normally there would be a softmax activation in the last layer, since this is a multi-class classification problem, but here we keep the default linear activation to get the pre-softmax logits, because these logits will be used along with the student network's pre-softmax logits in the distillation loss function.

Hence, we are using from_logits = True in the CategoricalCrossEntropy loss function. This means that the loss function will calculate the loss directly from the logits. If we had used softmax activation, then it would have been from_logits = False.

We will now define a callback for early stopping and run the model (the model is presumably compiled beforehand with the CategoricalCrossentropy(from_logits=True) loss, the Adam optimizer defined above, and accuracy as the metric; the compile call itself is not shown here).

Running the model:

earlystop = EarlyStopping(monitor='val_acc', patience=5, verbose=1)
filepath = "model_save/weights-{epoch:02d}-{val_accuracy:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks = [earlystop]

vgg_hist = model.fit(train_generator,
                     validation_data=validation_generator,
                     validation_steps=10,
                     steps_per_epoch=90,
                     epochs=50,
                     callbacks=callbacks)

Checking the accuracy and loss for each epoch:

import matplotlib.pyplot as plt

plt.figure(1)
# summarize history for accuracy
plt.subplot(211)
plt.plot(vgg_hist.history['acc'])
plt.plot(vgg_hist.history['val_acc'])
plt.title('teacher model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')
# summarize history for loss
plt.subplot(212)
plt.plot(vgg_hist.history['loss'])
plt.plot(vgg_hist.history['val_loss'])
plt.title('teacher model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.show()

Now we will evaluate the model on the test data:

# First, we are going to load the file names and their respective target labels into a numpy array!

from sklearn.datasets import load_files
import numpy as np

test_dir = '/content/test'
no_of_classes = 3   # NORMAL, BACTERIA, VIRUS

def load_dataset(path):
    data = load_files(path)
    files = np.array(data['filenames'])
    targets = np.array(data['target'])
    target_labels = np.array(data['target_names'])
    return files, targets, target_labels

x_test, y_test, target_labels = load_dataset(test_dir)

from keras.utils import np_utils
y_test = np_utils.to_categorical(y_test, no_of_classes)

# We just have the file names in the x set. Let's load the images and convert them into arrays.
from keras.preprocessing.image import array_to_img, img_to_array, load_img

def convert_image_to_array(files):
    images_as_array = []
    for file in files:
        # Convert to Numpy Array
        images_as_array.append(tf.image.resize(img_to_array(load_img(file)), (224, 224)))
    return images_as_array

x_test = np.array(convert_image_to_array(x_test))
print('Test set shape : ', x_test.shape)
x_test = x_test.astype('float32') / 255

# Let's visualize test predictions.
y_pred_logits = model.predict(x_test)
y_pred = tf.nn.softmax(y_pred_logits)

# plot a random sample of test images, their predicted labels, and ground truth
fig = plt.figure(figsize=(16, 9))
for i, idx in enumerate(np.random.choice(x_test.shape[0], size=16, replace=False)):
    ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_test[idx]))
    pred_idx = np.argmax(y_pred[idx])
    true_idx = np.argmax(y_test[idx])
    ax.set_title("{} ({})".format(target_labels[pred_idx], target_labels[true_idx]),
                 color=("green" if pred_idx == true_idx else "red"))

Calculating the accuracy of the test dataset:

print(model.metrics_names)
loss, acc = model.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

 

Model 2: Student Network with Knowledge Distillation

The student network defined here has a series of 2D convolution and max-pooling layers, just like our teacher network VGG16. The only difference is that the number of convolution filters in each layer of the student network is much smaller than in the teacher network. This lets us achieve our goal of having far fewer weights (parameters) to learn in the student network during training.

Defining the student network:

# import necessary layers
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.layers import MaxPool2D, Flatten, Dense, Dropout
from tensorflow.keras import Model

# input
input = Input(shape=(224, 224, 3))

# 1st Conv Block
x = Conv2D(filters=8, kernel_size=3, padding='valid', activation='relu')(input)
x = Conv2D(filters=8, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 2nd Conv Block
x = Conv2D(filters=16, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=16, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 3rd Conv block
x = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu')(x)
# x = Conv2D(filters=32, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 4th Conv block
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
# x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# 5th Conv block
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
# x = Conv2D(filters=64, kernel_size=3, padding='valid', activation='relu')(x)
x = MaxPool2D(pool_size=2, strides=2, padding='valid')(x)

# Fully connected layers
x = Flatten()(x)
# x = Dense(units=1028, activation='relu')(x)
x = Dense(units=256, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(units=3)(x)   # last layer with linear activation

# creating the model
s_model_1 = Model(inputs=input, outputs=output)
s_model_1.summary()

Note that the number of parameters here is only 296k as compared to what we got in the teacher network (27M).

Now we will define the distiller. Distiller is a custom class that we will define in Keras in order to establish coordination/communication with the teacher network.

This Distiller Class takes student-teacher networks, hyperparameters (alpha and temperature as mentioned in the first part of this article), and the train data (x,y) as input. The Distiller Class does forward propagation of teacher and student networks and calculates both the losses: Student Loss and Distillation Loss. Then the backpropagation of the student network is done and weights are updated.

Defining the Distiller:

class Distiller(keras.Model):
    def __init__(self, student, teacher):
        super(Distiller, self).__init__()
        self.teacher = teacher
        self.student = student

    def compile(self, optimizer, metrics, student_loss_fn, distillation_loss_fn,
                alpha=0.5, temperature=2):
        """ Configure the distiller.
        Args:
            optimizer: Keras optimizer for the student weights
            metrics: Keras metrics for evaluation
            student_loss_fn: Loss function of difference between student predictions and ground-truth
            distillation_loss_fn: Loss function of difference between soft student predictions and soft teacher predictions
            alpha: weight to student_loss_fn and 1-alpha to distillation_loss_fn
            temperature: Temperature for softening probability distributions. Larger temperature gives softer distributions.
        """
        super(Distiller, self).compile(optimizer=optimizer, metrics=metrics)
        self.student_loss_fn = student_loss_fn
        self.distillation_loss_fn = distillation_loss_fn
        self.alpha = alpha
        self.temperature = temperature

    def train_step(self, data):
        # Unpack data
        x, y = data

        # Forward pass of teacher (the teacher is already trained, so training=False)
        teacher_predictions = self.teacher(x, training=False)

        with tf.GradientTape() as tape:
            # Forward pass of student
            student_predictions = self.student(x, training=True)

            # Compute losses
            student_loss = self.student_loss_fn(y, student_predictions)
            distillation_loss = self.distillation_loss_fn(
                tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
                tf.nn.softmax(student_predictions / self.temperature, axis=1),
            )
            loss = self.alpha * student_loss + distillation_loss

        # Compute gradients
        trainable_vars = self.student.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Update the metrics configured in `compile()`
        self.compiled_metrics.update_state(y, student_predictions)

        # Return a dict of performance
        results = {m.name: m.result() for m in self.metrics}
        results.update(
            {"student_loss": student_loss, "distillation_loss": distillation_loss}
        )
        return results

    def test_step(self, data):
        # Unpack the data
        x, y = data

        # Compute predictions
        y_prediction = self.student(x, training=False)

        # Calculate the loss
        student_loss = self.student_loss_fn(y, y_prediction)

        # Update the metrics
        self.compiled_metrics.update_state(y, y_prediction)

        # Return a dict of performance
        results = {m.name: m.result() for m in self.metrics}
        results.update({"student_loss": student_loss})
        return results

Now we will initialize and compile the distiller. Here for the student loss, we are using the Categorical cross-entropy function and for distillation loss, we are using the KLDivergence loss function.

The KLDivergence loss function measures the distance between two probability distributions. By minimizing the KL divergence, we push the student network to produce predictions similar to those of the teacher network.
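For reference, this is the standard definition (not something specific to this article's code): the KL divergence between a target distribution P (here the softened teacher output) and an approximating distribution Q (the softened student output) over the classes is

$$D_{KL}(P \,\|\, Q) = \sum_{i} p_i \log \frac{p_i}{q_i}$$

It is zero exactly when the two distributions match, which is why driving it down makes the student mimic the teacher's soft predictions.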

Compiling and Running the Student Network Distiller:

# Initialize and compile distiller
distiller = Distiller(student=s_model_1, teacher=model)
distiller.compile(
    optimizer=Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001),
    metrics=['acc'],
    student_loss_fn=CategoricalCrossentropy(from_logits=True),
    distillation_loss_fn=tf.keras.losses.KLDivergence(),
    alpha=0.5,
    temperature=2,
)

# Distill teacher to student
distiller_hist = distiller.fit(train_generator,
                               validation_data=validation_generator,
                               epochs=50,
                               validation_steps=10,
                               steps_per_epoch=90)

Checking the plot of accuracy and loss for each epoch:

import matplotlib.pyplot as plt

plt.figure(1)
# summarize history for accuracy
plt.subplot(211)
plt.plot(distiller_hist.history['acc'])
plt.plot(distiller_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')
# summarize history for loss
plt.subplot(212)
plt.plot(distiller_hist.history['student_loss'])
plt.plot(distiller_hist.history['val_student_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.show()
plt.tight_layout()

Checking accuracy on the test data:

print(distiller.metrics_names)
acc, loss = distiller.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

We got 74% accuracy on the test data, while the teacher network gave 77%. Now we will change the hyperparameter t to see if we can improve the accuracy of the student network.

Compiling and Running the Distiller with t = 6:

# Initialize and compile distiller
distiller = Distiller(student=s_model_1, teacher=model)
distiller.compile(
    optimizer=Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001),
    metrics=['acc'],
    student_loss_fn=CategoricalCrossentropy(from_logits=True),
    # distillation_loss_fn=CategoricalCrossentropy(),
    distillation_loss_fn=tf.keras.losses.KLDivergence(),
    alpha=0.5,
    temperature=6,
)

# Distill teacher to student
distiller_hist = distiller.fit(train_generator,
                               validation_data=validation_generator,
                               epochs=50,
                               validation_steps=10,
                               steps_per_epoch=90)

Plotting the loss and accuracy for each epoch:

import matplotlib.pyplot as plt

plt.figure(1)
# summarize history for accuracy
plt.subplot(211)
plt.plot(distiller_hist.history['acc'])
plt.plot(distiller_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')
# summarize history for loss
plt.subplot(212)
plt.plot(distiller_hist.history['student_loss'])
plt.plot(distiller_hist.history['val_student_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.show()
plt.tight_layout()

Checking the test accuracy:

print(distiller.metrics_names)
acc, loss = distiller.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

With t = 6, we have got 75% accuracy which is better than what we got with t = 2.

This way, we can run more iterations by changing the values of the hyperparameters alpha (α) and temperature (t) in order to get better accuracy.

Model 3: Student Model without Knowledge Distillation

Now we will check the student model without Knowledge Distillation. Here there will be no coordination with the teacher network and there will be only one loss function i.e. Student Loss.

The student model remains the same as the previous one, but this time it is trained without distillation (in the code below it is named s_model_2 and has the same architecture as s_model_1).

Compiling and running the model:

# s_model_2 is assumed to be compiled beforehand with optimizer=opti and CategoricalCrossentropy(from_logits=True)
opti = Adam(learning_rate=1e-4, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.001)
earlystop = EarlyStopping(monitor='val_acc', patience=5, verbose=1)
filepath = "model_save/weights-{epoch:02d}-{val_accuracy:.4f}.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks = [earlystop]

s_model_2_hist = s_model_2.fit(train_generator,
                               validation_data=validation_generator,
                               validation_steps=10,
                               steps_per_epoch=90,
                               epochs=50,
                               callbacks=callbacks)

Our model stopped after 13 epochs because we used the early-stopping callback, which halts training when there is no improvement in validation accuracy for 5 epochs.

Plotting the loss and accuracy for each epoch:

import matplotlib.pyplot as plt

plt.figure(1)
# summarize history for accuracy
plt.subplot(211)
plt.plot(s_model_2_hist.history['acc'])
plt.plot(s_model_2_hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='lower right')
# summarize history for loss
plt.subplot(212)
plt.plot(s_model_2_hist.history['loss'])
plt.plot(s_model_2_hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'valid'], loc='upper right')
plt.tight_layout()
plt.show()

Checking the Test Accuracy:

print(s_model_2.metrics_names)
loss, acc = s_model_2.evaluate(x_test, y_test, verbose=1)
print('test loss = ', loss)
print('test accuracy = ', acc)

Here we are able to achieve 64% accuracy on the test data.

Result Summary:

Below is the comparison of all four models that are made in this case study:

S. No.  Model                               No. of Parameters  Hyperparameters   Test Accuracy
1       Teacher Model                       27 M               –                 77%
2       Student Model with Distillation     296 k              α = 0.5, t = 2    74%
3       Student Model with Distillation     296 k              α = 0.5, t = 6    75%
4       Student Model without Distillation  296 k              –                 64%

As seen from the above table, with Knowledge Distillation we achieved 75% accuracy with a very light neural network. We can play around with the hyperparameters α and t to improve it further.

Conclusion 

In this article, we saw that Knowledge Distillation can compress a Deep CNN while maintaining the accuracy so that it can be deployed on embedded systems that have less storage and computational power.

We used Knowledge Distillation on the pneumonia detection problem from x-ray images. By distilling knowledge from a Teacher Network with 27M parameters to a Student Network with only 0.296M parameters (almost 100 times lighter), we were able to achieve almost the same accuracy. With more hyperparameter iterations and ensembling of multiple student networks, as mentioned in reference [3], the performance of the student model can be further improved.

References

1) Kermany et al., "Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning", Cell, 2018.

2) Dataset: Kermany, Daniel; Zhang, Kang; Goldbaum, Michael (2018), "Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification", Mendeley Data, V2, doi: 10.17632/rscbjbr9sj.2

3) Designing Lightweight Deep Learning Models for Echocardiography View Classification.


Related

Golang Program To Compare Elements In Two Slices

In this tutorial, we will learn how to compare elements in two slices. Slices cannot be compared directly with the == operator, so they are compared by checking their lengths and then comparing the elements one by one in a loop. The output will be printed as a Boolean value on the console with the help of the fmt.Println() function. Let's see how to do this with the help of an example.

Method 1: Using a user-defined function

In this method, we will compare elements in two slices using an external function and, in that function, we will set some conditions, if the slices satisfy those conditions, they will be considered equal else they won’t be considered equal. Let’s have a look to get a better understanding.

Syntax func append(slice, element_1, element_2…, element_N) []T

The append function is used to add values to a slice. It takes a variable number of arguments: the first argument is the slice to which we wish to add the values, followed by the values to add. The function returns the final slice containing all the values.

Algorithm

Step 1 − Create a package main and import fmt package in the program.

Step 2 − Create a main function, in it create two slices of type string and call a function named slice_equality with two slices as arguments.

Step 3 − Create a function slice_equality and in that function check if the length of the first slice is not equal to the second slice return false.

Step 4 − Next, run a for loop over the range of str1 and check whether each element of str2 is equal to the corresponding element of str1; if they are not equal, return false.

Step 5 − If none of the checks above returned false, return true from the function.

Step 6 − Print the Boolean value using the fmt.Println() function, where ln refers to a new line.

Example

Golang program to compare elements in two slices using custom function

package main

import "fmt"

func slice_equality(str1, str2 []string) bool {
   if len(str1) != len(str2) {
      return false
   }
   for i, str := range str1 {
      if str != str2[i] {
         return false
      }
   }
   return true
}

func main() {
   str1 := []string{"Goa", "Gujarat"}
   str2 := []string{"Goa", "Gujarat"}
   fmt.Println("The slices are equal or not before adding any element:")
   fmt.Println(slice_equality(str1, str2))
   str2 = append(str2, "Mumbai")
   fmt.Println("The slices are equal or not after adding another element:")
   fmt.Println(slice_equality(str1, str2))
}

Output

The slices are equal or not before adding any element:
true
The slices are equal or not after adding another element:
false

Method 2: Using built-in function

In this method, we will use the reflect.DeepEqual() function to compare two slices recursively. Built-in functions ease our work and shorten the code. The output will again be printed using the fmt.Println() function. Let's have a look and learn how to solve this problem.

Syntax reflect.DeepEqual()

This function compares two values recursively. It traverses both values and checks the equality of the corresponding data at each level. However, this approach is less safe than an explicit element-by-element comparison in a loop, so reflect should be used with care and only where it is really needed.
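To illustrate the "use with care" point, here is a small sketch (my own illustration, not part of the original tutorial) showing one well-known surprise: reflect.DeepEqual treats a nil slice and an empty, non-nil slice as different, while the loop-based comparison above would consider them equal.

package main

import (
   "fmt"
   "reflect"
)

func main() {
   var a []string  // nil slice, length 0
   b := []string{} // empty but non-nil slice, length 0

   // Both have the same length and the same (zero) elements...
   fmt.Println("lengths:", len(a), len(b))

   // ...but DeepEqual reports them as different because one is nil.
   fmt.Println("reflect.DeepEqual:", reflect.DeepEqual(a, b)) // false
}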

func append(slice, element_1, element_2…, element_N) []T

The append function is used to add values to a slice. It takes a variable number of arguments: the first argument is the slice to which we wish to add the values, followed by the values to add. The function returns the final slice containing all the values.

Algorithm

Step 1 − Create a package main and import fmt and reflect package in the program.

Step 2 − Create a function main and in that function create two slices of type string which are to be compared with each other.

Step 3 − In the first case before adding any new element in the slice, compare the slices using reflect.DeepEqual() function with the slices as parameters.

Step 4 − In the second case add new string in the slice and compare the slices using reflect.DeepEqual() function with the slices as parameters.

Step 5 − The output will be printed on the console as a Boolean value using the fmt.Println() function.

Example

Golang program to compare elements in two slices using built-in function

package main

import (
   "fmt"
   "reflect"
)

func main() {
   str1 := []string{"Goa", "Gujarat"}
   str2 := []string{"Goa", "Gujarat"}
   fmt.Println("The strings are equal or not before adding any element:")
   fmt.Println(reflect.DeepEqual(str1, str2))
   str2 = append(str2, "Mumbai")
   fmt.Println("The strings are equal or not after adding any element:")
   fmt.Println(reflect.DeepEqual(str1, str2))
}

Output

The strings are equal or not before adding any element:
true
The strings are equal or not after adding any element:
false

Conclusion

In this tutorial on comparing slices, we used two methods. In the first method we used a custom function with some conditions, and in the second method we used the built-in reflect.DeepEqual() function.

Golang Program To Convert Int Type Variables To String

In this tutorial we will learn how to convert int type variables to string variables using Golang programming language.

A string is defined as a sequence of one or more characters (letters, numbers, or symbols). Computer applications frequently use strings as a data type, so there is a need to convert strings to numbers or numbers to strings in many places, especially when we are using data entered by the user.

Syntax func Itoa(x int) string

The Itoa() function in the Go programming language is used to get the string representation of an integer value, here depicted by x. This function takes an integer value as an argument and returns the corresponding string representation of that number.

INPUT − int type

OUTPUT − string
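Since the introduction above mentions converting in both directions, here is a brief sketch (an illustration of my own, not one of the tutorial's examples) pairing strconv.Itoa with its counterpart strconv.Atoi, which converts a string back to an int and reports invalid input through an error.

package main

import (
   "fmt"
   "strconv"
)

func main() {
   // int -> string
   s := strconv.Itoa(42)
   fmt.Printf("%T %q\n", s, s) // string "42"

   // string -> int (Atoi also reports bad input via an error)
   n, err := strconv.Atoi("123")
   if err != nil {
      fmt.Println("conversion failed:", err)
      return
   }
   fmt.Printf("%T %d\n", n, n) // int 123
}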

Example: Golang program to convert int type variables to strings using the Itoa() function

Using the strconv.Itoa() method from the strconv package in the Go standard library, we can convert numbers to strings. The numeric value (a literal or a variable) passed into the method's parentheses is changed into a string value.

Algorithm

STEP 1 − Import the package fmt and strconv.

STEP 2 − Start the function main()

STEP 3 − Initialize the integer variables and assign appropriate values to them.

STEP 4 − Print the data type of variable

STEP 5 − Use the strconv.Itoa() function to convert the integer type value to string.

STEP 6 − Store the result in a separate variable

STEP 7 − Print the result on the screen using fmt.Println() function

Example

package main

import (
   "fmt"
   "strconv"
)

func main() {
   number := 20
   fmt.Printf("Type of variable before conversion: %T \nValue : %v\n", number, number)
   result := strconv.Itoa(number)
   fmt.Printf("Type of variable after conversion : %T \nValue : %v\n", result, result)
}

Output

Type of variable before conversion: int
Value : 20
Type of variable after conversion : string
Value : 20

The output shows that the value 20, which was of type int before the conversion, is now a string value rather than an integer.

Description of the Code

In the above program, we first declare the package main.

We imported the fmt package, which provides formatted I/O functions. We also imported the strconv package, which implements conversions to and from string representations of basic data types.

Now we initialize an int type variable and store a value in it.

Next we call the function Itoa() and pass the respective integer value as an argument to it. It converts the int type variable to a string type variable.

Store the result in a separate variable and print the result on the screen using fmt.Println() function.

To print the result we can contrast the data types of the variable before and after the conversion process.

Example: Golang program to convert int type variables to strings using the Sprintf() function

Syntax func Sprintf(format string, a ...interface{}) string

This function returns a formatted string. The first argument should be a format string, followed by a variable number of arguments. The function returns the result formatted according to that format string.

Algorithm

STEP 1 − Import the package fmt and strconv package.

STEP 2 − Start the function main()

STEP 3 − Initialize the integer variable and assign value to it.

STEP 4 − Print the type of variable.

STEP 5 − Use the Sprintf() function to convert integer type value to string.

STEP 6 − Print the type of variable again after the conversion process.

Example

package main

import (
   "fmt"
   "reflect"
)

func main() {
   number := 200
   fmt.Println("Number =", number)
   var num = reflect.TypeOf(number)
   fmt.Println("Type of variable before conversion =", num)
   result := fmt.Sprintf("%s", number)
   var res = reflect.TypeOf(result)
   fmt.Println("Type of Variable = ", res)
   fmt.Println("Value =", result)
}

Output

Number = 200
Type of variable before conversion = int
Type of Variable =  string
Value = %!s(int=200)

Description of the Code

First declare the package main.

Import the fmt package that allows us to print anything on the screen.

Now initialize a variable called number of type integer and assign appropriate value to it.

Now use Sprintf() function to convert the integer type value to string and pass the integer number as an argument to it.

Store the output in a separate variable here we have named it as result.

Print the data type of the result variable along with the value it possesses on the screen using the fmt.Println() function.
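One caveat worth adding here (my note, not part of the original text): the %s verb is meant for strings, so fmt.Sprintf("%s", number) on an int produces the error-annotated value %!s(int=200) that appears in the output above. Using %v (or %d) gives the plain textual form, as in this small sketch:

package main

import (
   "fmt"
   "reflect"
)

func main() {
   number := 200
   result := fmt.Sprintf("%v", number) // %v (or %d) formats an int correctly
   fmt.Println("Type of Variable =", reflect.TypeOf(result))
   fmt.Println("Value =", result) // Value = 200
}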

Conclusion

We have successfully compiled and executed the Golang program code to convert int type variables to string type variables along with the examples.

Golang Program To Show Overriding Of Methods In Classes

When a method is overridden in Go, a new method with the same name as an existing method (typically one promoted from an embedded type) is defined on the outer type and is used in place of the existing one. As a result, Golang can provide polymorphism, allowing different implementations of the same method to be used depending on the type of the receiver. Let's see the execution in examples.
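As a minimal sketch of this shadowing mechanism (a generic illustration of my own, separate from the shape examples below), an outer struct embeds a base struct and defines a method with the same name; calls on the outer type pick the outer method, while the embedded one is still reachable explicitly.

package main

import "fmt"

type Animal struct{}

func (a Animal) Speak() string { return "some generic sound" }

type Dog struct {
   Animal // embedded: Dog "inherits" Speak from Animal
}

// Speak on Dog overrides (shadows) the promoted Animal.Speak.
func (d Dog) Speak() string { return "woof" }

func main() {
   d := Dog{}
   fmt.Println(d.Speak())        // woof (Dog's own method wins)
   fmt.Println(d.Animal.Speak()) // some generic sound (embedded method still accessible)
}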

Method 1: Using shape struct

Here, the Shape struct contains an area field and an Area() method which returns the value of that field. Rectangle and Square both embed Shape, so both inherit the Area() method. The output will be their areas printed on the console.

Algorithm

Step 1 − Create a package main and declare fmt(format package) package in the program

Step 2 − First, we create a struct Shape with a field called area of type float64 and a method called Area() that returns the field’s value.

Step 3 − The Shape struct is then embedded in a new structure called Rectangle, which also has two float64-type properties called width and height.

Step 4 − Then, we create a function for the Rectangle struct called CalculateArea() that computes the area using the width and height variables and assigns the result to the area field that was inherited from the Shape struct.

Step 5 − The Shape struct is also embedded in the Square struct, which similarly has a field side of type float64.

Step 6 − Then, using the side field to compute the area and assigning the result to the area field inherited from the Shape struct, we define the method CalculateArea() for the Square struct.

Step 7 − To compute the areas of the forms, we construct pointers to instances of the Rectangle and Square structs in the main function and call their respective CalculateArea() functions.

Step 8 − We invoke the Area() function on the Rectangle and Square pointers to obtain the areas of the forms after computing their areas.

Step 9 − The area of rectangle and the area of square is printed on the console using fmt.Println() function where ln means new line.

Example

In this example we will learn how to override methods in class using shape struct.

package main

import (
   "fmt"
)

type Shape struct {
   area float64
}

func (sq *Shape) Area() float64 {
   return sq.area
}

type Rectangle struct {
   Shape
   width  float64
   height float64
}

func (rect *Rectangle) CalculateArea() {
   rect.area = rect.width * rect.height
}

type Square struct {
   Shape
   side float64
}

func (sq *Square) CalculateArea() {
   sq.area = sq.side * sq.side
}

func main() {
   rect := &Rectangle{width: 16, height: 6}
   rect.CalculateArea()
   fmt.Println("Area of rectangle: ", rect.Area())
   sq := &Square{side: 8}
   sq.CalculateArea()
   fmt.Println("Area of square: ", sq.Area())
}

Output

Area of rectangle: 96
Area of square: 64

Method 2: Using shape interface

Here, the shapes rectangle and square implement the shape interface by implementing the area method. In the end, the output will be the area of square and rectangle.

Algorithm

Step 1 − Create a package main and declare fmt(format package) package in the program

Step 2 − In the beginning, we create an interface called Shape with a function called Area() that returns a float64 value.

Step 3 − We then define a structure called Rectangle, which has two float64-type attributes called width and height.

Step 4 − The Area() method for the Rectangle struct is then implemented by creating a function with the same signature as the Shape interface and computing the area using the width and height parameters.

Step 5 − In a similar manner, we create a structure called Square with a field of type float64.

Step 6 − Then, using the side field to determine the area this time, we implement the Area() method for the Square struct.

Step 7 − The Rectangle struct’s Area() method is called in the main function, and it returns the area of the rectangle.

Step 8 − Additionally, we construct an instance of the Square struct and use its Area() method to obtain the square’s area.

Step 9 − The area of both the shapes is printed on the console using fmt.Println() function where ln means new line.

Example

In this example we will learn how to override methods in class using shape interface.

package main

import (
   "fmt"
)

type Shape interface {
   Area() float64
}

type Rectangle struct {
   width  float64
   height float64
}

func (rect Rectangle) Area() float64 {
   return rect.width * rect.height
}

type Square struct {
   side float64
}

func (sq Square) Area() float64 {
   return sq.side * sq.side
}

func main() {
   rect := Rectangle{width: 16, height: 6}
   fmt.Println("Area of rectangle: ", rect.Area())
   sq := Square{side: 8}
   fmt.Println("Area of square: ", sq.Area())
}

Output

Area of rectangle: 96
Area of square: 64

Conclusion

We executed the program of showing how to override methods in class using two examples. In the first example we used shape struct and in the second example we used shape interface.
