And no, I'm not talking about the vegetable. That's right, I'm talking about the band. As a lover of funk rock, alt rock, and rock as a whole, I consider RHCP the epitome of all things rock. Sure, they may not be the heaviest band, like Megadeth or Motörhead, nor are they on the other side of the spectrum with the constant ballads of the modern Arctic Monkeys. However, RHCP finds a way to play all of these styles, and they sure do play them well. From "Under the Bridge" to "Californication" to "Dani California" to "Snow," ask someone if they know an RHCP song and the answer is likely yes.
The reason I'm bringing this to you today is that yesterday I had the pleasure of seeing my favorite band in concert, live before my eyes. And yes, I'm again talking about RHCP. As a youngster who is into the rock scene that came before my time, getting to see a band I've listened to and watched recordings of for so long perform live was an invaluable experience. I not only got to witness amazing, energetic live music but also the camaraderie of the crowd. Everyone singing every lyric to every song was a beautiful thing to experience, one I did not expect. At some point during the concert, my friend and I, along with some people in our row we didn't know, embraced each other and moved along to each song. All in all, I can say with great confidence that live music is something everyone should experience in some capacity in their life.
Brewing kombucha has been a hobby of mine for a while now, and I think it's time to share how we can all enjoy this amazing beverage. Kombucha is a drink made with a living bacterial culture, but these are not harmful bacteria. They are the amazing microbiota that aid in digestion and other gastrointestinal processes. For a person like myself who has long struggled with GI issues such as reflux and heartburn, kombucha really helped ease my volatile stomach.
So, to start, we will have to make a SCOBY, which is, in essence, the bacterial mother culture of kombucha. For this, we will need:

- 7 cups of water
- 1/2 cup of sugar
- black tea bags
- starter kombucha
To make the kombucha itself, we can use the SCOBY. We will need the same ingredients, but instead of 7 cups of water, we will use 8, and instead of 1/2 cup of sugar, we will use a full cup. We will no longer need the starter kombucha, as we now have the culture, and we will use 6 bags of black tea. Bring 2 cups of water to a boil and steep the tea the same way you did for the SCOBY. After this, add all the sugar and make sure it dissolves in the solution. Again, empty the solution into the gallon container and pour the rest of the water in over it. Finally, place the SCOBY at the top of the solution, covering the container tightly with a cheesecloth. After 1 week, you can start tasting the drink, and after 2-3 weeks in the same conditions as the SCOBY, the kombucha should be ready to enjoy! However, if you want carbonation, there are a couple of extra steps. First, you will need some flip-top bottles. Empty the contents of your first fermentation into the flip-top bottles, add half a teaspoon of sugar to each bottle, and leave them out at room temperature for a few days. You should be good to go!

A Google engineer recently said that his AI went sentient. What does this mean, you may ask? Well, it means that the AI was capable of thinking for itself and feeling things by itself. Essentially, the AI became capable of human-like processing, which is scary in a sense. The Google engineer was dismissed and placed on indefinite leave for this, opening up a couple of questions about AI ethics.
For one, is it okay to have sentient AIs? Well, this is a complicated idea in and of itself. It is hard to gauge the impact a sentient AI could have on humanity. One would expect that if there were many sentient AIs, the majority would be good and function in a generally human-like way. However, some bad apples would also be expected, as sentience implies something adjacent to human nature, and human society, of course, has its bad apples. The issue with this reasoning is that it does not account for the possibility that a group of sentient AIs could look down upon humans. After all, the only things computers are still incapable of doing better than humans are feeling emotion and perceiving things for themselves; they can do nearly everything else better. Thus, if they gained those abilities and became sentient, it is entirely possible that they would see themselves as a superior group and try to supersede humans in the social hierarchy.

The other ethical question has to do with Google. Is it really okay for Google to essentially hush an employee who claims this sentience? Well, the higher-ups at Google likely know more about the situation than the general public (us) does. As such, if they deemed the information either false or not that big of a deal, they would be right in their actions. However, this cannot be verified by anyone on the outside, and since this is new technology, and thereby new territory, there are little to no regulations for law enforcement to push Google with. This is all just food for thought, along with some of my sci-fi fantasy musings. The whole situation is definitely an interesting one, and we will continue to see how it develops.

Random forest is a supervised learning algorithm that is less prone to overfitting than neural networks because of its randomized ensemble approach. Random forest classifies using a voting model.
The model builds a "forest," an ensemble of decision trees, trained with the "bagging" method. Essentially, the random forest algorithm gets a more accurate and stable prediction by building multiple decision trees and merging their outputs. Random forest can be used for both classification and regression problems.
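To make the bagging-plus-voting idea concrete, here is a minimal toy sketch in pure Python. It uses one-feature decision stumps as the "trees" (a real random forest, such as scikit-learn's RandomForestClassifier, grows full decision trees over random feature subsets); the function names and the tiny dataset are my own invention for illustration.

```python
import random
from collections import Counter

def train_stump(X, y):
    """Pick the threshold on feature 0 that best separates the two classes."""
    best = None
    for t in sorted(set(x[0] for x in X)):
        preds = [1 if x[0] >= t else 0 for x in X]
        acc = sum(p == label for p, label in zip(preds, y)) / len(y)
        if best is None or acc > best[1]:
            best = (t, acc)
    return best[0]

def bagged_forest(X, y, n_trees=25, seed=0):
    """Train n_trees stumps, each on a bootstrap sample (bagging)."""
    rng = random.Random(seed)
    thresholds = []
    for _ in range(n_trees):
        # Bootstrap: sample the training set with replacement.
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        thresholds.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return thresholds

def predict(thresholds, x):
    """Each stump votes; the majority class wins."""
    votes = Counter(1 if x[0] >= t else 0 for t in thresholds)
    return votes.most_common(1)[0][0]

X = [[1.0], [1.5], [2.0], [5.0], [5.5], [6.0]]
y = [0, 0, 0, 1, 1, 1]
forest = bagged_forest(X, y)
```

Because every stump sees a slightly different resampling of the data, individual stumps disagree, but the majority vote is more stable than any single one, which is exactly the overfitting-resistance the ensemble provides.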
CNNs aren't all rosy! One of the weaknesses of CNNs is that they are prone to overfitting. Overfitting is when a model fits its training dataset too closely. As a result, the model may fail to generalize to additional data, which hurts its accuracy when predicting future observations. Overfitting happens for various reasons, including noise in the training dataset, the limited size of the training dataset, and the complexity of the classifier. One strategy to mitigate overfitting is called early stopping, a simple yet effective approach in which you stop training at the point where performance on a validation dataset no longer improves and starts to degrade. The model is trained for many iterations, and after each iteration it is evaluated on a validation dataset. As soon as accuracy on the validation dataset starts to decline and loss starts to increase, the training process is stopped.
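The early-stopping loop described above can be sketched in a few lines. This is a toy version: the hypothetical `validation_loss` function below fakes the typical U-shaped curve (improving, bottoming out, then degrading as overfitting sets in), whereas in real training it would come from evaluating the model on a held-out set each epoch.

```python
def validation_loss(epoch):
    """Hypothetical validation curve: decreases until epoch 10, then rises."""
    return (epoch - 10) ** 2 / 100.0 + 0.5

def train_with_early_stopping(max_epochs=50, patience=3):
    best_loss = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        loss = validation_loss(epoch)
        if loss < best_loss:
            # Validation performance improved: remember this checkpoint.
            best_loss, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            # No improvement for `patience` epochs: stop training and
            # roll back to the best checkpoint.
            break
    return best_epoch, best_loss

best_epoch, best_loss = train_with_early_stopping()
```

The `patience` parameter keeps one noisy bad epoch from ending training prematurely; frameworks expose the same idea (e.g. Keras's `EarlyStopping` callback).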
CNNs utilize a small grid of parameters called a kernel to iterate over an array or image. The kernel goes over each area in the array, applying a filter whose results are returned as computed tensors. Since a feature can occur anywhere in the image, this use of an optimizable feature extractor makes CNNs highly efficient for image processing and large-array processing. The network is then trained to find kernels in the convolution layers and weights in the fully connected layers that minimize the differences between the output predictions and the labels on a training dataset. To get more specific about how the neural network corrects its weights and thresholds, a loss function is employed. Loss functions help the model quantify how incorrect it would be if it were used on an input with a known correct output. Some loss functions include mean squared error, binary cross-entropy, categorical cross-entropy, etc. To correct and adjust for this error, one of the algorithms used is the backpropagation algorithm. This algorithm uses the loss between the estimated outputs (from forward propagation) and the expected outputs to go backward through the model, from the output layer to the hidden layers, assigning blame for the error and adjusting the node weights as it goes. After each iteration of the model's training, this backpropagation runs, progressively training the model to be more accurate. The backpropagation algorithm was introduced in the 1970s and brought to the limelight by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986. Backpropagation is based on the expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ in the network, which describes how quickly the cost changes when we change the weights and biases. So, the backpropagation algorithm is not only a fast algorithm, but it also provides good insight into how changing the weights and biases changes the overall behavior of the network.
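The gradient-descent update at the heart of this can be shown on the smallest possible "network": one input, one weight, no bias, identity activation. This is only a sketch of the idea, not real backpropagation, which applies the same chain-rule bookkeeping layer by layer. With cost $C = (wx - \text{target})^2$, the derivative is $\partial C/\partial w = 2(wx - \text{target})\,x$:

```python
def train_single_weight(x, target, lr=0.1, steps=100):
    """Fit w so that w * x approximates target, by gradient descent on MSE."""
    w = 0.0
    for _ in range(steps):
        pred = w * x                      # forward pass
        grad = 2 * (pred - target) * x    # dC/dw via the chain rule
        w -= lr * grad                    # gradient-descent update
    return w

# With x = 2.0 and target = 6.0, w converges toward 3.0 (since 3.0 * 2.0 = 6.0).
w = train_single_weight(x=2.0, target=6.0)
```

Each update moves the weight in whatever direction shrinks the cost, which is exactly the "assigning blame and adjusting weights" step the paragraph describes, just without the layer-by-layer chain.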
Convolutional Neural Networks (CNNs) are one of the most popular forms of ANN architecture. CNNs are primarily used to solve complex pattern-recognition problems within images with precision and simplicity. A CNN consists of neurons that train and optimize themselves through deep learning. CNNs have become dominant in various computer vision tasks, including radiology, due to their effective pattern recognition. CNNs are composed of three building blocks: convolution, pooling, and fully connected layers. The convolution and pooling layers perform feature extraction, whereas the fully connected layers map the extracted features into the final output. A convolution layer is composed of a stack of mathematical operations, such as a specialized type of linear operation called convolution. The outputs of convolution are then passed through a nonlinear activation function. The most popular nonlinear activation function is the rectified linear unit (ReLU), which simply computes the function f(x) = max(0, x). A pooling layer performs a downsampling operation, which introduces translation invariance to small shifts and distortions and decreases the number of subsequent learnable parameters. The output feature maps of the pooling layer are flattened into a one-dimensional array and connected to fully connected layers, also known as dense layers. Then a last-layer activation function is applied to normalize the output values.
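The three building blocks can be sketched from scratch in pure Python: a plain 2x2 convolution (technically cross-correlation, with no padding or stride options), the ReLU activation, and 2x2 max pooling. The toy image and "diagonal difference" kernel below are made up for illustration; frameworks fuse and heavily optimize these same operations.

```python
def conv2d(image, kernel):
    """Slide the kernel over the image, producing a smaller feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(fmap):
    """Elementwise nonlinearity: negative responses are zeroed out."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool2x2(fmap):
    """Downsample by keeping the maximum of each non-overlapping 2x2 block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 2, 0, 1, 2],
         [0, 1, 3, 1, 0],
         [1, 0, 1, 2, 1],
         [2, 1, 0, 1, 3],
         [0, 2, 1, 0, 1]]
kernel = [[1, 0],
          [0, -1]]  # a toy "diagonal difference" filter

features = max_pool2x2(relu(conv2d(image, kernel)))
```

In a full CNN, the kernel values would be learned by backpropagation rather than fixed by hand, and the pooled feature map would be flattened and fed into the dense layers.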
In light of this recent news, it is not only important but vital that we raise our voices against this injustice. While it is great that many are rallying together to protest, it is not always feasible to participate. However, there are still a plethora of ways you can stay involved and make your voice heard. For starters, it is important to be educated on all of the new ideas being put forward (i.e., defunding the police does not mean entirely removing the police, but rather reducing reliance on the unit, which will in turn rebuild a stronger community). Secondly, it is important to support Black-owned businesses and understand the ideas of Black authors and activists. Elite Daily, a news and media website, has posted 12 ways to support the Black Lives Matter movement in case you are unable to get out and protest. Please refer to the image of their Instagram post here: Additionally, Mark Yabut has posted important revolutionary texts by Black authors and activists, including Angela Davis, Audre Lorde, bell hooks, Ibram X. Kendi, James Baldwin, Malcolm X, and many more. This Google Drive folder can be accessed here:
bit.ly/black-texts-2

As we continue to educate ourselves, we can empower ourselves and our community to end the inherent systemic racism in America. Additionally, it is of utmost importance that we treat the upcoming election as an opportunity to replace those who abuse their power with those who listen to and are willing to support the public, in an effort to make America better for all. Every election matters, and we must make sure our voices are heard. Visit WhenWeAllVote.org for more information and to register to vote. As Angela Davis stated, "In a racist society, it is not enough to be non-racist, we must be anti-racist." Let us all raise our voices, because in order to be anti-racist, we must all agree that Black Lives Matter.

As the summer draws near, I reminisce about all the wonderful plans I had made for the summer before my senior year of high school. I had planned on running another half marathon and definitely competing in some more cycling road races. However, all of my plans were shot down fast once COVID hit. Because of COVID, I have been kept inside my house, with little time to get out and enjoy the fresh air. It's during times like these, though, that you can truly appreciate and enjoy the beauty of being outdoors. Before, we took many things for granted: going shopping, visiting friends and family, and just being outside, for starters. During COVID, it is imperative that we all find our happy place to remain optimistic during such trying times. For example, take a quick walk outside in the early evening to destress from the work you have done inside, or spend some quality time with those you are quarantined with by playing board games or cards. Such small things that we initially took for granted will help us ground ourselves and stay sane.
As we all struggle through this together, we need to work as a united front, where we show compassion and empathy for all: those on the frontline, those suffering from COVID, and those suffering from the second-hand impacts of COVID.
Stay Happy. Stay Healthy. Stay Home.

As the pandemic continues to take the world by storm, we can protect ourselves by wearing masks. However, for some, it can be hard to go out and buy one. But do not fear: here is a way you can make an easy, no-stitch mask using just a bandana and 2 hair ties. Now be ready to protect yourself while still looking amazing!
Aneesh Shakthy
A car and computer enthusiast who rows.

Archives: September 2022