
Search Results


  • Understanding the Roles and Responsibilities of SOC Analysts

    In today's digital landscape, the importance of cybersecurity cannot be overstated. As cyber threats continue to evolve and become more sophisticated, organizations are increasingly relying on Security Operations Center (SOC) Analysts to protect their data and systems. This blog post will delve into the critical roles and responsibilities of SOC Analysts, shedding light on what it takes to succeed in this dynamic field.

    What is a SOC Analyst?

    A Security Operations Center (SOC) Analyst is a cybersecurity professional responsible for monitoring, detecting, and responding to security threats and incidents. They play a crucial role in safeguarding an organization's information systems by analyzing security alerts, investigating incidents, and implementing security measures.

    SOC Analysts are often the first line of defense against cyberattacks. They work in a highly collaborative environment, often as part of a larger team that includes IT professionals, cybersecurity experts, and incident responders. Due to the increasing rates of data breaches and cyber threats, the demand for SOC Analysts has risen significantly, making it a promising career choice.

    The Daily Responsibilities of a SOC Analyst

    A typical day for a SOC Analyst can be demanding, requiring a wide array of skills and knowledge. Their core responsibilities generally include:

    Monitoring Security Systems

    SOC Analysts continuously monitor security information and event management (SIEM) tools and other security systems for alerts and indicators of compromise. This involves keeping a close eye on network traffic, user activity, and system logs.

    Incident Response

    When a potential threat is detected, SOC Analysts must quickly assess the situation. They determine the severity of the incident and decide on appropriate responsive actions.
    This may include isolating affected systems, gathering forensic evidence, or escalating the issue to higher-level security personnel.

    Threat Hunting

    Besides reacting to incidents, SOC Analysts engage in proactive threat hunting. This means searching for potential vulnerabilities and threats before they can cause harm. By understanding the tactics, techniques, and procedures (TTPs) used by attackers, Analysts can better defend their organization against future attacks.

    Documentation and Reporting

    Documentation is another vital aspect of a SOC Analyst's role. They are responsible for maintaining detailed records of security incidents, response actions taken, and overall system health. Regular reporting to management and stakeholders is crucial for improving security measures and informing future incident response strategies.

    Continuous Learning and Adaptation

    The field of cybersecurity is ever-evolving. SOC Analysts must stay up-to-date with the latest threats, vulnerabilities, and technologies. This involves continuous education, participation in training programs, and obtaining relevant certifications. Resources like the soc analyst guide can be invaluable for these ongoing learning efforts.

    Essential Skills for SOC Analysts

    To excel as a SOC Analyst, certain skills are essential:

    Technical Skills

    A solid foundation in IT and cybersecurity is necessary. SOC Analysts should be proficient in:

    - Network security protocols
    - Firewalls and intrusion detection systems
    - Incident response frameworks
    - Security scripting languages

    Analytical Skills

    SOC Analysts must possess strong analytical abilities to effectively assess security threats and incidents. They need to interpret vast amounts of data and identify patterns that may indicate suspicious activities.

    Communication Skills

    Effective communication is critical for SOC Analysts.
    They need to explain complex security issues to team members and non-technical stakeholders clearly. Writing accurate reports and documentation is also a significant part of their role.

    Problem-Solving Abilities

    When facing security incidents, SOC Analysts must think quickly on their feet. They need to devise effective solutions under pressure and adapt to rapidly changing situations.

    The Importance of SOC Analysts in Organizations

    The role of SOC Analysts is not just about fighting cyber threats. Their work is vital for the overall health of an organization's cybersecurity posture. Here are some key reasons why SOC Analysts are indispensable:

    Reducing Response Time

    By monitoring security systems in real-time, SOC Analysts drastically reduce the response time to potential threats. Rapid response actions can often prevent minor incidents from escalating into significant breaches.

    Enhancing Security Awareness

    SOC Analysts also help cultivate a security-aware culture within the organization. They often conduct training sessions and workshops to educate employees about cybersecurity best practices.

    Strengthening Compliance

    Many organizations face compliance requirements regarding data security and privacy. SOC Analysts can assist in ensuring that the organization meets these legal and regulatory standards, thereby reducing the risk of penalties.

    Improving Incident Management

    Through their documentation and reporting efforts, SOC Analysts help organizations continuously improve their incident management processes. Analyzing past incidents enables the development of better response plans for future events.

    Career Path and Development for SOC Analysts

    Understanding the trajectory for a career as a SOC Analyst can guide aspiring professionals in their journey. Most SOC Analysts start their careers in junior positions, such as security technician or IT support roles, before moving into more advanced positions.
    Certifications and Education

    Professional certifications can significantly enhance a SOC Analyst's credibility and knowledge. Some recognized certifications include CompTIA Security+.

    Networking and Mentorship

    Networking within the cybersecurity community can open up opportunities for growth and learning. Aspiring SOC Analysts should consider joining forums, attending conferences, and seeking mentorship from experienced professionals in the field.

    The Future of SOC Analysts in Cybersecurity

    As organizations rely more on digital platforms, the demand for skilled SOC Analysts will continue to grow. Technology will likely introduce new tools and automation solutions in this area, enabling SOC Analysts to work more efficiently. However, human expertise will remain irreplaceable in strategic decision-making and critical thinking roles.

    A Continuous Learning Journey

    The landscape of cybersecurity is constantly changing. SOC Analysts must be lifelong learners to keep pace with emerging threats and technologies. Investing in continuous education and training is essential for every SOC Analyst looking to thrive in this field.

    Final Thoughts

    The role of SOC Analysts is crucial in today's cybersecurity landscape. Their responsibilities encompass monitoring, incident response, rigorous documentation, and keeping abreast of the latest trends and threats. Organizations that invest in skilled SOC Analysts are better prepared to defend against cyber risks. Emphasizing ongoing learning and adaptation will empower these analysts and their organizations to navigate the complex web of cybersecurity challenges effectively.
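As a concrete illustration of the monitoring work described above, here is a minimal, hypothetical sketch of the kind of logic a SIEM detection rule encodes. The log format, the event name FAILED_LOGIN, and the 5-failure threshold are all assumptions for this example, not any specific product's rule:

```python
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5  # assumed threshold for this sketch

def flag_brute_force(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Count failed logins per source IP and flag any IP at or over the threshold."""
    failures = Counter()
    for line in log_lines:
        # Assumed log format: "<timestamp> <event_name> <source_ip>"
        parts = line.split()
        if len(parts) == 3 and parts[1] == "FAILED_LOGIN":
            failures[parts[2]] += 1
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Six failures from one source IP, a single failure from another.
sample = ["2024-01-01T00:00:0%d FAILED_LOGIN 10.0.0.5" % i for i in range(6)]
sample += ["2024-01-01T00:00:09 FAILED_LOGIN 10.0.0.9"]
print(flag_brute_force(sample))  # prints ['10.0.0.5']
```

A real SIEM expresses this same idea declaratively (correlation rules over normalized events), but the count-and-threshold shape is the same.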

  • Cybersecurity Side Hustles: Writing Thought Leadership

    Is This The Best Way To Monetize Your Cybersecurity Knowledge In 2025?

    Cybersecurity Side Hustles: Writing Thought Leadership

    Last year I published a guide on how to become a cybersecurity writer and make some good side income. This year I want to highlight another key opportunity for cybersecurity writers in 2025. That is: writing thought leadership content for LinkedIn. This is Cybersecurity Side Hustles: Writing Thought Leadership.

    LinkedIn is no longer just a job-hunting platform — it's the new engine for lead generation and brand building. CISOs, CEOs, and other business leaders want to be known as thought leaders on the platform. For cybersecurity professionals with good writing skills, this presents a golden opportunity: getting paid handsomely to ghostwrite thought-provoking cybersecurity content for busy executives.

    Why LinkedIn Is the New Platform for Thought Leadership

    LinkedIn has evolved into the most powerful platform for professionals and businesses. For cybersecurity executives, publishing high-quality content on LinkedIn is critical for three reasons:

    - CISOs and CEOs are under constant pressure to project expertise and authority in cybersecurity. By sharing insightful posts, articles, and newsletters, they can establish themselves as thought leaders and trusted voices in the industry.
    - Businesses now look to LinkedIn as a key source for finding potential partners, service providers, and talent. Consistent, valuable content attracts leads and builds trust.
    - Thought leadership differentiates CISOs and their organizations in a competitive industry.

    However, thought leadership only works if the content is original, thought-provoking, and does not read like something copied straight from ChatGPT. The problem? Time. Most executives simply don't have the time to sit down and create high-quality, engaging content regularly. That's where you can come in.

    What Is Cybersecurity Ghostwriting?
    Cybersecurity ghostwriting is writing content - posts, articles, whitepapers, or reports - on behalf of cybersecurity leaders or businesses. The content is published under the executive's name, but you're the one who creates it behind the scenes. Executives are hungry for content that:

    - Highlights industry trends and insights (e.g., Zero Trust, AI security risks, NIS2 compliance).
    - Resonates with a broader audience without boring them to tears with technical jargon.
    - Showcases their leadership, ideas, and company's value.
    - Does not sound like AI-generated fluff with 500 emojis thrown in.

    Your role as a cybersecurity ghostwriter is to bridge the gap between technical expertise and good storytelling. If you're a cybersecurity professional with strong writing skills, this is one of the most lucrative side hustles you can start.

    How to Position Yourself as a Cybersecurity Ghostwriter

    1. Build Your Writing Portfolio on Medium or LinkedIn

    If you're new to writing, the first step is to build a credible portfolio. Start publishing insightful cybersecurity content regularly:

    - Medium: Write in-depth guides, opinion pieces, and analysis of cybersecurity trends. Medium publications can help you reach a larger audience.
    - LinkedIn: Share weekly posts or start a LinkedIn Newsletter focused on cybersecurity topics you're passionate about.

    The goal here is to showcase your ability to write clear, engaging, and thought-provoking content - skills that C-level executives value. A great way is to get the LinkedIn Top Voice Blue Badge. Consistency and high-quality content can get you noticed and amplify your authority.

    2. Develop Your Copywriting Skills

    Cybersecurity writing isn't just about facts and figures. To stand out, you need to learn how to write persuasive copy that:

    - Captures attention (with strong headlines and hooks).
    - Simplifies complex ideas (avoiding jargon overload).
    - Provides value and actionable insights.

    Invest in learning copywriting basics.
    Study the writing style of successful ghostwriters, take online courses, or read books on the topic. Strong copywriting will set you apart from writers who only focus on technical accuracy.

    3. Pitch Yourself to C-Level Executives

    Once you have a small portfolio, it's time to monetize. Here's how to find clients:

    A. Use LinkedIn to Find Leads

    Search for CISOs, CTOs, or cybersecurity leaders who post inconsistently but still want to build their brand. Engage with their content: leave insightful comments, share their posts, and start building a relationship. After a few weeks, send a direct pitch:

    - Highlight your expertise in cybersecurity.
    - Showcase your writing portfolio.
    - Explain how you can help them build their thought leadership presence on LinkedIn.

    Sample Pitch: "Hi [Executive Name], I've noticed your insightful posts on [cybersecurity topic]. As a cybersecurity professional and writer, I help leaders like you create engaging, original content that builds authority on LinkedIn. I'd love to chat about how I can support your thought leadership goals. Here's a link to my recent work: [Portfolio Link]."

    B. Freelance Platforms and Cold Outreach

    Create a professional profile on Fiverr, Upwork, or Contently showcasing your cybersecurity and writing expertise. Reach out to cybersecurity companies or PR agencies that represent C-level leaders. Offer competitive rates initially to secure your first few clients. Once you build credibility, you can charge premium rates.

    Why Cybersecurity Professionals Should Jump In Now

    Cybersecurity ghostwriting is a high-demand, high-value niche for several reasons:

    - There is a shortage of writers who understand cybersecurity deeply enough to write accurate, insightful content.
    - AI tools like ChatGPT can create generic content, but businesses and executives crave authentic, human-driven perspectives.
    - Thought leadership is more critical than ever for executives who want to differentiate themselves in a crowded industry.
    For cybersecurity professionals, ghostwriting is a perfect fit because:

    - You already have the subject matter expertise.
    - The demand for original cybersecurity content is only increasing.
    - It's a side hustle that can pay exceptionally well per article, depending on complexity and client.

    Key Takeaways for Aspiring Cybersecurity Ghostwriters

    - Start writing consistently on LinkedIn and Medium to showcase your skills.
    - Develop storytelling and copywriting skills to engage audiences.
    - Leverage LinkedIn to connect with CISOs, CEOs, and cybersecurity companies.
    - Thought leadership writing is about building relationships and trust — both take time but are immensely rewarding.

    Cybersecurity ghostwriting is one of the most underrated yet lucrative opportunities for professionals in the field. As companies and executives increasingly turn to LinkedIn for brand building and lead generation, the need for original, high-quality content will only grow. By combining your technical expertise with writing skills, you can carve out a profitable niche and build a reputation as the go-to ghostwriter for cybersecurity thought leadership. So stop waiting - start writing. Good luck on your cybersecurity side hustles for 2025!

  • How to Harden Windows

    At-home Windows Hardening Security Project

    Hanging out with fellow hackers is part of our job. Most of us white hats dabble in curiosities from time to time, and you're typically surrounded by people technically skilled enough to raise the risks for you a bit. Below is a guide, the At-home Windows Hardening Security Project, that I created to help you harden your Windows 10/11 system without making it so secure that it is unusable.

    Disable Remote Access

    Attackers can use Microsoft Remote Desktop's remote access feature to gain control of your device, steal information, and install malware. You'll want to be able to launch Remote Desktop Connection to log into various things (including the lab here), but you do not wish to host a remote desktop service. The easiest graphical way to disable Remote Desktop is by using Settings:

    1. Launch Settings using Windows+i.
    2. From the left sidebar, select "System."
    3. On the right pane, scroll down and choose "Remote Desktop."
    4. On the following screen, turn off the "Remote Desktop" toggle.

    Note that the Windows 11 Home edition doesn't support hosting Remote Desktop.

    Use Antivirus

    Windows' Virus & threat protection is good enough, and it is on by default. Go to Start, type in "Virus & Threat Protection," then go to "Manage settings." Make sure that all toggles are in the "on" position. If you do choose to handle malware on your computer, you will want to take note of the "Exclusions" section and add exclusions for the folders you don't wish to scan.

    Create Strong Passwords

    Passwords should be in a password manager, and I don't care what anyone says; you should invest in a good one like LastPass. Always be careful about who you're giving your data to, and consider their financial situation. You should also purchase two YubiKeys, ensure the password manager's 2-factor authentication is enabled, and set it up with your primary and backup YubiKey. Buy a YubiKey Nano to stick in your laptop and keep a YubiKey on your keyring.
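To put rough numbers behind the password-manager advice, here is a minimal sketch of the standard entropy estimate (length times log2 of the character-set size). The function name and the uniform-randomness model are my assumptions for illustration; the model only applies to randomly generated passwords, which is exactly what a manager produces:

```python
import math
import string

def entropy_bits(password: str) -> float:
    """Rough entropy estimate for a *randomly generated* password:
    length * log2(size of the character set it draws from)."""
    charset = 0
    if any(c in string.ascii_lowercase for c in password):
        charset += 26
    if any(c in string.ascii_uppercase for c in password):
        charset += 26
    if any(c in string.digits for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)  # 32 printable ASCII symbols
    return len(password) * math.log2(charset) if charset else 0.0

# A long random mixed-charset password dwarfs a short lowercase one.
print(round(entropy_bits("abcdefgh"), 1))             # 8 chars, lowercase only
print(round(entropy_bits("V3ry!L0ng&R4ndom#Pw"), 1))  # 19 chars, mixed charset
```

The takeaway: let the manager generate long random passwords, and protect the one master password (plus YubiKey) with everything you have.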
    Share your master password with a loved one and make your password vault part of your digital inheritance if something should happen to you. I know I am bleeding into other subjects, but someone needs access to your digital identities if something were to happen to you. There is a point where security is so tight that no one can access anything, and that isn't where you need to draw the line. It's something you need to consider seriously. You'll already be maintaining your digital life.

    Enable File Backups

    Regular file backup can help prevent data loss during malware attacks or hardware failures. Go back to Start, then "Virus & Threat Protection," scroll down to "Ransomware protection," click the option to "Set up OneDrive," and follow the prompt to choose which folders to back up.

    Turn on Core Isolation

    This feature adds virtualization-based security to protect against malicious code and hackers. It isolates core processes in memory and prevents hackers from taking control of unsecured drivers. To turn on core isolation in Windows 11, do the following:

    1. Click the Start button.
    2. Type "Windows Security."
    3. Select Device security.
    4. Select Core isolation details.
    5. Turn on: Local Security Authority protection and Microsoft Vulnerable Driver Blocklist.

    Turn on BitLocker Drive Encryption

    If you have Windows 11 Pro, go ahead and set up BitLocker Drive Encryption. That way, when your computer starts up, you will be prompted for a password, and your data will be encrypted at rest.

    Optional PUA Protection

    I've never turned this on, and it may be an annoyance as we tend to play with many applications, but you do have the ability to turn on "Reputation-based protection," which will protect you from potentially unwanted applications.

    Windows Update Settings

    Go to Windows Update Settings and ensure "Get the latest updates as soon as they are available" is OFF. Even with this setting off, you will still receive important security updates automatically to protect your device.
    Then click on "Advanced Options" and turn on "Receive updates for other Microsoft Products." That should do it. Make sure you stay updated with Windows updates and use your password manager. Also, make sure you turn on 2-factor authentication everywhere!

    Tyler Wall is the founder of Cyber NOW Education. He holds bills for a Master of Science from Purdue University and CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, and ten online courses specifically for SOC analysts. You can connect with him on LinkedIn.

    You can sign up for a Lifetime Membership of Cyber NOW® with a special deal for 15% off with coupon code "KB15OFF", which includes all courses, certification, the cyber range, the hacking lab, webinars, the extensive knowledge base, forums, and spotlight eligibility, to name a few benefits.

    Download the Azure Security Labs eBook from the Secure Style Store. These labs walk you through several hands-on fun labs in Microsoft Azure, leaving you with the know-how to create a gig on Fiverr or Upwork to start your cybersecurity freelancing.

    Some of our free resources include the Forums, the Knowledge Base, our True Entry Level SOC Analyst Jobs, Job Hunting Application Tracker, Resume Template, and Weekly Networking Checklist. Ensure you create an account or enter your email to stay informed of our free giveaways and promos, which we often offer.

    Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success, 2nd edition, published June 1st, 2024, winner of the 2024 Cybersecurity Excellence Awards and a finalist in the Best Book Awards. If you enjoy audiobooks, I suggest the Audible version, but you can also get it in beautiful paperback, Kindle, or PDF versions. The downloadable PDF version can be grabbed here.

  • Two Part Serverless Lab - Part One

    Two Part Serverless Lab - Part One

    We will kick this serverless lab off with an introduction to serverless computing. Serverless computing is a way of handling backend services as needed. Instead of worrying about the technical details of the underlying infrastructure, a serverless provider lets users write and deploy code without the hassle. With this approach, a company using serverless is billed based on its actual usage, avoiding the need to reserve and pay for a fixed amount of resources. Even though physical servers are still in play, developers don't need to think about them.

    In the early days of the internet, building a web application required owning the physical hardware to run a server. This was both cumbersome and costly. Then came cloud computing, where remote servers or server space could be rented. However, developers often over-purchased to prevent traffic spikes from breaking their applications. Even with auto-scaling, unforeseen events like a DDoS attack could lead to high costs. This is Two Part Serverless Lab - Part One.

    What are Backend Services?

    Application development is divided into frontend and backend. The frontend is what users see and interact with, while the backend includes the servers and databases handling application files and data. For instance, on a concert ticket website, when a user searches for an artist, the frontend sends a request to the backend, which retrieves the data from a database and sends it back to the frontend.

    Benefits of Serverless

    Serverless computing lets developers buy backend services on a flexible, pay-as-you-go basis. It's like switching from a monthly fixed data plan to paying only for the data used. Despite the name "serverless," servers are still involved, but all the management is handled by the vendor. Developers can focus on their work without dealing with server concerns.

    Advantages of Serverless Computing

    The advantages of serverless computing are threefold.
    They provide lower costs, since you are only paying for what is used. They provide simplified scalability, because serverless vendors handle scaling on demand. Lastly, they provide quicker turnaround: the serverless architecture speeds up development and deployment.

    Comparison with Other Cloud Backend Models

    In comparison with Platform-as-a-Service (PaaS), PaaS provides tools for development but isn't as easily scalable and may have startup delays. In comparison with Infrastructure-as-a-Service (IaaS), IaaS involves hosting infrastructure but doesn't necessarily mean serverless functionality.

    Difference between Serverless and Containers

    One last thing that was confusing to me, and may be to you, is the difference between containers and serverless. Both serverless computing and containers enable developers to build applications with far less overhead and more flexibility than applications hosted on traditional servers or virtual machines. Serverless applications are more scalable and usually more cost-effective, since they only run when needed and are more lightweight. You can copy and paste code into the Cloud Service Provider and it will handle everything required to run that code, given that it is supported. Modules, libraries, and dependencies in a serverless instance are already installed and maintained by the Cloud Service Provider and ready to be used by your code.

    A container "contains" both an application and all the elements the application needs to run properly, including system libraries, system settings, and other dependencies. Containers are a heavier package, as they come with everything they need to run. Containers that need other containers are orchestrated to run together, and that is what Kubernetes does.

    Drawbacks of Serverless Computing

    Serverless computing is getting better as providers find solutions to improve its drawbacks. One of these drawbacks is called "cold starts."
    Here's how it works: When a particular serverless function hasn't been used for a while, the provider turns it off to save energy. When a user runs an application that needs that function, the provider has to start it up again, causing a delay known as a "cold start." Once the function is up and running, it responds much faster to subsequent requests (called "warm starts"). However, if the function isn't used for a while, it goes dormant again. This means the next user asking for that function will experience another cold start.

    Serverless Vendors

    When it comes to serverless computing, there isn't one giant cloud provider in the market; there are three: Amazon, Microsoft, and Google. Between them, the triplet of US west-coast behemoths control more than half of the serverless computing market, with smaller players like IBM and Alibaba capturing the largest slices of what is left over.

    Serverless computing and Infrastructure-as-a-Service (IaaS) technologies are sometimes assumed to be almost commoditized these days, with differences coming down to little more than price. The reality, though, is that there are indeed some important points of difference between the Amazon, Microsoft, and Google offers, and depending on your project and the use case you are addressing, there may well be a best option for you.

    When it comes to assessing the big three players, there are some pretty clear preferences on the part of industry analysts. Leading research and advisory firm Gartner, for one, puts Amazon out ahead of Microsoft and Google. Their annual Magic Quadrant for Cloud Infrastructure as a Service clearly recognizes that AWS, Azure, and GCP are leaders, but it also clearly establishes the superior offer that AWS delivers. As Gartner explains, Amazon has the most mature serverless offer and serves the greatest diversity of customers.
    Though there are a few words of caution about AWS with regard to pricing, focus, and the other activities of parent company Amazon impacting whether or not a client would want to deploy on their serverless infrastructure (Amazon.com competitor Walmart, for example), the industry view on AWS is overwhelmingly positive.

    Microsoft finds itself well behind AWS, according to Gartner, but it is still well ahead of Google and the other niche actors (Alibaba, Oracle, and IBM). Significantly, for Gartner one of the core strengths of the Azure offer is its capacity to serve the IoT market. For smart, connected, and networked IoT devices, Azure as a Cloud provider could be a good choice, and perhaps even a superior one to the AWS offer. However, there are two caveats: first, as Azure is still growing, there are occasionally stability and downtime issues that AWS does not seem to suffer, and second, the level of technical support for development teams is not always the best.

    As the third member of the industry's big three serverless offerings, Google Cloud Platform is a leader thanks to its scale, but not thanks to its performance. Gartner explains that the company has "an immaturity of process and procedures" and is "difficult to transact with at times". What's more, they note that the limited number and expertise of partners doesn't inspire a lot of confidence among some enterprise customers, though they also note that GCP is often a preferred choice for startups and scaleups. On the bright side, Google seems to have gotten containers right, and Gartner also notes that Google has "differentiated technologies on the forward edge of IT, specifically in analytics and machine learning" and that this has encouraged machine learning and AI-focused firms to shift to Google in some cases. For Gartner and, to be sure, for most in industry, the leadership of AWS as a Cloud provider is undisputed.
    The market backs this up, too: despite the rise of Azure and the advances made by Google, Amazon remains in front – and by a long way. In the next section we will walk you through deploying a simple "hello world" AWS Lambda function to get you familiar with how serverless works.

    Part Two

    Tyler Wall is the founder of Cyber NOW Education. He holds bills for a Master of Science from Purdue University and also CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn.

    To view my dozens of courses, visit my homepage and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success, winner of the 2024 Cybersecurity Excellence Awards.

  • Two Part Serverless Lab - Part Two

    Two Part Serverless Lab - Part Two

    You're going to want to have hands-on experience with both Azure and AWS, as they are by far the two biggest players in Cloud computing. Our course Cloud Security NOW! covers getting hands-on with both of these platforms. We are going to work our way hands-on in this serverless lab, part two. AWS, like Azure, offers you a free tier for signing up. Go ahead and get signed up with AWS. Now that that's out of the way, let's get hands-on in your first AWS Cybersecurity Lab, using AWS Lambda to execute a function for serverless computing.

    Create a Lambda function with the console

    In this example, your function takes a JSON object that contains two integer values labeled "length" and "width". The function multiplies these values to calculate an area and returns this as a JSON string. Your function also prints the calculated area, along with the name of its CloudWatch log group.

    To create your function, you first use the console to create a basic Hello World function. Then you add your own function code.

    To create a Hello World Lambda function with the console:

    1. Open the Functions page of the Lambda console.
    2. Choose Create function.
    3. Select Author from scratch.
    4. In the Basic information pane, for Function name, enter myLambdaFunction.
    5. For Runtime, choose Python 3.12.
    6. Leave architecture set to x86_64 and choose Create function.

    Lambda creates a function that returns the message Hello from Lambda! Lambda also creates an execution role for your function. An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. For your function, the role that Lambda creates grants basic permissions to write to CloudWatch Logs.

    You now use the console's built-in code editor to replace the Hello world code that Lambda created with your own function code. Choose the Code tab. In the console's built-in code editor, you should see the function code that Lambda created.
    If you don't see the lambda_function.py tab in the code editor, select lambda_function.py in the file explorer. Paste the following code into the lambda_function.py tab, replacing the code that Lambda created.

        import json
        import logging

        logger = logging.getLogger()
        logger.setLevel(logging.INFO)

        def lambda_handler(event, context):
            # Get the length and width parameters from the event object.
            # The runtime converts the event object to a Python dictionary.
            length = event['length']
            width = event['width']

            area = calculate_area(length, width)
            print(f"The area is {area}")
            logger.info(f"CloudWatch logs group: {context.log_group_name}")

            # Return the calculated area as a JSON string.
            data = {"area": area}
            return json.dumps(data)

        def calculate_area(length, width):
            return length * width

    Select Deploy to update your function's code. When Lambda has deployed the changes, the console displays a banner letting you know that it's successfully updated your function.

    Understanding your function code

    Before you move to the next step, let's take a moment to look at the function code and understand some key Lambda concepts.

    The Lambda handler: Your Lambda function contains a Python function named lambda_handler. A Lambda function in Python can contain more than one Python function, but the handler function is always the entry point to your code. When your function is invoked, Lambda runs this method. When you created your Hello world function using the console, Lambda automatically set the name of the handler method for your function to lambda_handler. Be sure not to edit the name of this Python function. If you do, Lambda won't be able to run your code when you invoke your function.

    The Lambda event object: The function lambda_handler takes two arguments, event and context.
An event in Lambda is a JSON formatted document that contains data for your function to process. If your function is invoked by another AWS service, the event object contains information about the event that caused the invocation. For example, if an Amazon Simple Storage Service (Amazon S3) bucket invokes your function when an object is uploaded, the event will contain the name of the Amazon S3 bucket and the object key. In this example, you’ll create an event in the console by entering a JSON formatted document with two key-value pairs. The Lambda context object: The second argument your function takes is context. Lambda passes the context object to your function automatically. The context object contains information about the function invocation and execution environment. You can use the context object to output information about your function's invocation for monitoring purposes. In this example, your function uses the log_group_name parameter to output the name of its CloudWatch log group. Logging in Lambda: With Python, you can use either a print statement or a Python logging library to send information to your function's log. To illustrate the difference in what's captured, the example code uses both methods. In a production application, we recommend that you use a logging library. Invoke the Lambda function using the console To invoke your function using the Lambda console, you first create a test event to send to your function. The event is a JSON formatted document containing two key-value pairs with the keys "length" and "width". To create the test event In the Code source pane, choose Test. Select Create new event. For Event name enter myTestEvent. In the Event JSON panel, replace the default values by pasting in the following:

{
  "length": 6,
  "width": 7
}

Choose Save. You now test your function and use the Lambda console and CloudWatch Logs to view records of your function’s invocation.
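Before testing in the console, you can sanity-check the handler logic locally. A minimal sketch, trimmed to the calculation path; the SimpleNamespace standing in for Lambda's context object is a local-testing assumption, not part of the lab:

```python
import json
from types import SimpleNamespace

# The same handler logic from the lab, trimmed for a local run.
def calculate_area(length, width):
    return length * width

def lambda_handler(event, context):
    area = calculate_area(event['length'], event['width'])
    # Return the calculated area as a JSON string, as in the lab code.
    return json.dumps({"area": area})

# Stand-in for the Lambda context object; the full lab code only reads
# log_group_name, so that is all we fake here.
fake_context = SimpleNamespace(log_group_name="/aws/lambda/myLambdaFunction")

result = lambda_handler({"length": 6, "width": 7}, fake_context)
print(result)  # {"area": 42}
```

Running this with the same test event you create in the console confirms the area math before you ever press Deploy.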
To test your function and view invocation records in the console In the Code source pane, choose Test. When your function finishes running, you’ll see the response and function logs displayed in the Execution results tab. In this example, you invoked your code using the console's test feature. This means that you can view your function's execution results directly in the console. When your function is invoked outside the console, you need to use CloudWatch Logs. To view your function's invocation records in CloudWatch Logs Open the Log groups page of the CloudWatch console. Choose the log group for your function (/aws/lambda/myLambdaFunction). This is the log group name that your function printed to the console. In the Log streams tab, choose the log stream for your function's invocation. When you're finished working with the example function, delete it. You can also delete the log group that stores the function's logs, and the execution role that the console created. To delete a Lambda function Open the Functions page of the Lambda console. Choose a function. Choose Actions, Delete. In the Delete function dialog box, enter delete, and then choose Delete. To delete the log group Open the Log groups page of the CloudWatch console. Select the function's log group (/aws/lambda/myLambdaFunction). Choose Actions, Delete log group(s). In the Delete log group(s) dialog box, choose Delete. To delete the execution role Open the Roles page of the AWS Identity and Access Management (IAM) console. Select the function's execution role (for example, myLambdaFunction-role-31exxmpl). Choose Delete. In the Delete role dialog box, enter the role name and then choose Delete. Tyler Wall is the founder of Cyber NOW Education. He holds a Master of Science from Purdue University as well as CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications.
He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn . To view my dozens of courses, visit my homepage  and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success , winner of the 2024 Cybersecurity Excellence Awards.

  • How to Get Started in IT Without Prior Experience

    Breaking into the IT industry can seem daunting, especially without any previous experience. Many aspiring professionals often feel overwhelmed by the fast-paced environment and the vast array of technical skills required. Fortunately, it's possible to get your foot in the door and start building a rewarding career in information technology, even if you're starting from scratch. This post will guide you through practical steps to launch your career in IT. A modern office workspace showcasing technology essential for IT. Entry-Level IT: Understanding the Landscape Before you dive into the nitty-gritty of job hunting, it's essential to understand what the entry-level IT landscape looks like. Entry-level IT jobs can include roles such as IT technician, helpdesk support, network administrator, and cybersecurity analyst. Although these roles have different responsibilities, they often share certain foundational skills and knowledge. According to recent studies, the global IT job market is expanding rapidly, with thousands of new roles being created every year. Demand for IT professionals is expected to grow by 11% from 2020 to 2030—that's faster than the average for all occupations. This growth translates into numerous opportunities for those willing to learn. Skills You Need to Succeed While most entry-level roles don’t require extensive prior experience, certain skills can significantly enhance your job prospects. Here are some key skills to consider: Technical Skills : Familiarize yourself with basic IT concepts, operating systems (Windows, Linux), networking fundamentals, and cybersecurity basics. Online courses can help you grasp these essential skills. Problem-Solving Abilities : IT professionals often troubleshoot issues, requiring patience and creative thinking. Analyze problems and develop strategies for resolving them. Communication Skills : IT isn’t just about technology—it’s also about conveying information effectively. 
You'll need to explain technical details to non-technical personnel clearly. Certifications : Consider getting certified. Entry-level certifications like CompTIA A+, Network+, and Security+ provide you with a significant edge in the job market. How to Gain Experience Finding ways to gain experience without a formal job can seem contradictory. Here are several practical ways to build your portfolio: Volunteer : Many nonprofits and local community organizations need IT assistance. Volunteering not only helps you build skills but also enhances your resume. Internships : Seek out internships, even unpaid ones. They offer real-life experience and can often lead to job offers. Online Projects : Consider contributing to open-source projects or developing your own tech projects. Websites like GitHub allow you to showcase your work to potential employers. Networking : Connect with professionals in the field via LinkedIn to learn about job opportunities. Join online forums or local meetups focusing on IT careers. A computer screen displaying coding essentials for entry-level IT roles. Is 30 Too Old to Start Cyber Security? A common myth is that starting a career in IT, particularly cybersecurity, at the age of 30 is too late. In reality, many professionals have successfully transitioned into the IT field beyond their 30s. The primary factor to consider is your willingness to learn and adapt. You can leverage the skills and experiences you've gained in other fields, such as management, healthcare, or education. Many IT roles value diverse experiences and perspectives. For example, a career in management equips you with essential skills in team collaboration and project management, both critical in IT environments. Moreover, the abundance of online learning resources means that age is less of a barrier than ever. Platforms like Coursera and edX offer numerous courses tailored for adults seeking new careers. 
Resources to Kickstart Your IT Career With the right resources at your fingertips, you can accelerate your journey into the IT world. Here are some recommended platforms and tools: Online Learning Platforms : Websites like Udacity or Pluralsight offer courses specifically designed to help beginners build their IT skills. YouTube Tutorials : Free video resources provide a hands-on approach to learning. Channels dedicated to IT tutorials can help with both foundational and advanced topics. Blogs and Forums : Engage with communities on websites like Reddit or Stack Overflow, where you can ask questions and exchange knowledge. Simulation Tools : Programs like Cisco Packet Tracer or GNS3 allow you to practice networking skills virtually without expensive equipment. A laptop on a desk with notebooks, symbolizing learning in IT. Applying for Jobs Once you acquire the necessary skills and experience, it's time to apply for jobs. Here are strategies to improve your chances: Tailor Your Resume : Customizing your resume for each position helps highlight your most relevant skills and experiences. Prepare for Interviews : Familiarize yourself with common interview questions related to IT roles. Practice articulating your problem-solving process clearly. Leverage LinkedIn : Networking is key. A well-optimized LinkedIn profile can attract recruiters looking for entry-level candidates. Be Open to Learning : Employers appreciate candidates who show eagerness to learn and adapt. Be ready to express your willingness to take on new responsibilities. Building a Sustainable Career Success in IT isn't just about landing a job; it’s about sustaining and growing your career over time. Consider these tips to remain relevant: Continuous Learning : The tech industry evolves rapidly. Commit to ongoing education through courses, webinars, and industry conferences. Get Certified : As you gain experience, aim for advanced certifications that align with your career goals. 
This could significantly boost your employability and salary potential. Join Professional Organizations : Being part of organizations like CompTIA or ISACA can provide resources, mentorship opportunities, and networking possibilities. Seek Feedback and Mentorship : Regularly ask colleagues for feedback and look for mentors who can guide you through your career trajectory. With persistence and determination, getting started in IT without prior experience is entirely possible. The industry is welcoming and filled with opportunities for individuals ready to put in the effort. Your Journey Awaits Embarking on a career in IT might seem overwhelming at first, especially if you're starting fresh. However, the landscape is rich with resources, training opportunities, and a welcoming community eager to support newcomers. Whether you aim to enter cybersecurity with no experience or explore other IT fields, the path to success involves continuous learning, practical experience, and networking. Take the first step today, and you may find yourself thriving in an industry that is not only rewarding but also offers incredible possibilities for growth and development. Your future in IT awaits!

  • Azure Infrastructure as Code - Part Six

    Azure Infrastructure as Code - Part Six Are you ready to wrap this up? In Azure Infrastructure as Code - Part Six we are going to put everything together and generate a report that can be presented to small and medium sized businesses on their cloud security posture. First, we are going to analyze Terraform code with Checkov. So let's do that.

Make Terraform Directory and Move There

mkdir ~/wrappingup
cd ~/wrappingup

Create main.tf file with VS Code

code main.tf

Paste Code into File, and Save

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.90.0"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

variable "prefix" {
  default = "tpot"
}

resource "azurerm_resource_group" "tpot-rg" {
  name     = "${var.prefix}-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.tpot-rg.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_virtual_machine" "main" {
  depends_on            = [azurerm_resource_group.tpot-rg]
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.tpot-rg.location
  resource_group_name   = azurerm_resource_group.tpot-rg.name
  network_interface_ids = [azurerm_network_interface.tpot-vm-nic.id]
  vm_size               = "Standard_A2m_v2"

  # Delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "canonical"
    offer     = "ubuntu-24_04-lts"
    sku       = "minimal-gen1"
    version   = "latest"
  }

  storage_os_disk {
    name              = "tpot-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "azureuser"
    admin_password = "CyberNOW!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

# Create Security Group to access linux
resource "azurerm_network_security_group" "tpot-nsg" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "linux-vm-nsg"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  security_rule {
    name                       = "AllowALL"
    description                = "AllowALL"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 150
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "tpot-vm-nsg-association" {
  depends_on                = [azurerm_resource_group.tpot-rg]
  subnet_id                 = azurerm_subnet.internal.id
  network_security_group_id = azurerm_network_security_group.tpot-nsg.id
}

# Get a Static Public IP
resource "azurerm_public_ip" "tpot-vm-ip" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-ip"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
  allocation_method   = "Static"
}

# Create Network Card for linux VM
resource "azurerm_network_interface" "tpot-vm-nic" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-nic"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.tpot-vm-ip.id
  }
}

output "public_ip" {
  value = azurerm_public_ip.tpot-vm-ip.ip_address
}

Format the file

terraform fmt

Execute Checkov Make sure you're in the directory that your Terraform is in.

checkov -f main.tf

Results We have seven failed checks. Looking through the list, it warns us about things we configured deliberately, like ports that are exposed to the public internet. Since this is the honeypot that we just configured in Azure Cybersecurity Labs - Part Four, we know that this works and we know that this is how it needs to be configured to work properly. So let's go ahead and deploy this to Azure. Type az login in the terminal to establish your credentials if they aren't cached already.

az login

Initialize the directory

terraform init

Now terraform plan

terraform plan

Note: Take a look at the Terraform plan and see the 8 resources that we are creating. While not mandatory, it's good practice to 'terraform plan' to review your changes BEFORE deploying. Now terraform apply

terraform apply

Make sure you have previously deleted this project from Azure so that you can deploy it again. Prowler Now we're getting into new stuff. Prowler is an open source security tool to perform AWS, Azure, Google Cloud and Kubernetes security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness, and also remediations! We have the Prowler CLI (Command Line Interface), which we call Prowler Open Source. You can install Prowler using pip3 like we did with Checkov in Azure Cybersecurity Labs - Part Five. So let's do that.

pip3 install prowler

and then we run Prowler

prowler azure --az-cli-auth

The results are displayed on your screen and also exported to your output directory. I like to view the HTML file and then use an HTML-to-JPG or HTML-to-PDF converter online. Our environment is new, so the report doesn't flag much other than turning Microsoft Defender on for resources which we do not currently have deployed.
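The exported results can also be filtered programmatically before you write your client report. A minimal sketch, assuming a hypothetical, trimmed CSV export; the column names here are illustrative, not Prowler's exact headers:

```python
import csv
import io

# Hypothetical, trimmed findings export; a real Prowler CSV carries many
# more columns and different header names.
sample_csv = """check_title,severity,status
Ensure Microsoft Defender for Servers is enabled,high,FAIL
Ensure storage accounts disallow public network access,medium,FAIL
Ensure Network Watcher is enabled,low,PASS
"""

findings = list(csv.DictReader(io.StringIO(sample_csv)))

# Keep only failed findings, highest severity first, for the client report.
order = {"high": 0, "medium": 1, "low": 2}
actionable = sorted(
    (f for f in findings if f["status"] == "FAIL"),
    key=lambda f: order[f["severity"]],
)

for f in actionable:
    print(f"[{f['severity'].upper()}] {f['check_title']}")
```

A pass like this gives you the shortlist to annotate with your own step-by-step remediation instructions, rather than handing over the raw report.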
Using Prowler is very simple, and the value that you are adding as a freelancer is discerning the results and narrowing them down to what is useful and actionable for the business. Do not just give them this report and be done with it. They will be unhappy. Instead, write specific recommendations in your own report with your own template, with step-by-step instructions on how to fix each issue that is important to them. And that wraps up the Azure Cybersecurity Labs series, but stick around for one BONUS as we discuss serverless computing. Tyler Wall is the founder of Cyber NOW Education. He holds a Master of Science from Purdue University as well as CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, and four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn . To view my dozens of courses, visit my homepage and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success , winner of the 2024 Cybersecurity Excellence Awards.

  • Azure Infrastructure as Code - Part Five

    Azure Infrastructure as Code - Part Five Next up is Azure Infrastructure as Code - Part Five. Checkov is a static code analysis tool for scanning infrastructure as code (IaC) files for misconfigurations that may lead to security or compliance problems. Checkov includes more than 750 predefined policies to check for common misconfiguration issues. Checkov also supports the creation and contribution of custom policies. Supported IaC types Checkov scans these IaC file types: Terraform (for AWS, GCP, Azure and OCI) CloudFormation (including AWS SAM) Azure Resource Manager (ARM) Serverless framework Helm charts Kubernetes Docker This lab shows how to install Checkov, run a scan, and analyze the results. Install Pip3 and Python pip3 is the official package manager and pip command for Python 3. It enables the installation and management of third-party software packages with features and functionality not found in the Python standard library. Pip3 installs packages from PyPI (the Python Package Index). You can get it by installing the latest version of Python here.

Install Checkov From PyPI Using Pip

pip3 install checkov

Make Terraform Directory and Move There

mkdir ~/checkov-example
cd ~/checkov-example

Create main.tf file with VS Code

code main.tf

Paste Code into File, Save, then Exit

resource "aws_s3_bucket" "foo-bucket" {
  # same resource configuration as previous example, but acl set for public access.
  acl = "public-read"
}

data "aws_caller_identity" "current" {}

Format the file

terraform fmt

Execute Checkov Make sure you're in the directory that your Terraform is in.

checkov -f main.tf

Results It's that simple. As you can see, Checkov runs and notes that there were 8 failed checks, including public read access enabled. If you click on the link it will take you to a guide which explains the failure in more detail and teaches you how to fix it. Checkov checks for all common configuration and security errors in your Terraform code BEFORE deploying it.
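Beyond the console output, Checkov can emit machine-readable results (for example with `checkov -f main.tf -o json`) that you can post-process yourself. A minimal sketch against a hypothetical, trimmed result document; real output includes passed checks, a summary block, and many more fields per check:

```python
import json

# Hypothetical, trimmed JSON result for the S3 example above.
raw = """
{
  "results": {
    "failed_checks": [
      {"check_id": "CKV_AWS_20",
       "check_name": "S3 Bucket has an ACL defined which allows public READ access."},
      {"check_id": "CKV_AWS_21",
       "check_name": "Ensure all data stored in the S3 bucket have versioning enabled"}
    ]
  }
}
"""

report = json.loads(raw)
failed = report["results"]["failed_checks"]

# Summarize the failures for a quick triage list.
print(f"{len(failed)} failed checks")
for check in failed:
    print(f"  {check['check_id']}: {check['check_name']}")
```

This kind of summary is handy when you want to fail a CI pipeline or diff scan results between commits instead of reading the full console report.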
Anytime you download a Terraform script to execute in your environment, you will want to run Checkov to make sure that it meets your standards for configuration. In the next blog, wrapping up this series, we will be checking a Terraform configuration file for issues with Checkov, deploying it to Azure, and using the open source tool Prowler to perform a security best practices assessment of your Azure environment. The report generated can be presented to small and medium sized businesses with your recommendations for remediation. You will then be able to create a gig on Fiverr or Upwork or the like and conduct low-cost cloud security assessments. Remember to continue your education to pass the Terraform Associate exam. Tyler Wall is the founder of Cyber NOW Education. He holds a Master of Science from Purdue University as well as CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, and four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn . To view my dozens of courses, visit my homepage and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success , winner of the 2024 Cybersecurity Excellence Awards.

  • Azure Infrastructure as Code - Part Four

    Azure Infrastructure as Code - Part Four Let's get started on Azure Infrastructure as Code - Part Four. In this lab we are going to continue our Terraform exercises by deploying a honeypot via Terraform. If you have been following along, previously on this blog I had you install T-Pot manually using the GUI in Azure. There's a much easier way to do this, so let's get rollin'. Create the Terraform Configuration File First, in the terminal on Mac we will issue the following commands to create a directory that will contain our Terraform configuration:

mkdir ~/tpot
cd ~/tpot

And open up a file for main.tf

code main.tf

On Windows, create a folder anywhere called "tpot", create a new file called "main" with the file extension ".tf", and open that file with Visual Studio Code. Now we need to write configuration to create a few new resources. Copy and paste the code snippet into the "main.tf" file:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.90.0"
    }
  }
}

provider "azurerm" {
  # Configuration options
  features {}
}

variable "prefix" {
  default = "tpot"
}

resource "azurerm_resource_group" "tpot-rg" {
  name     = "${var.prefix}-resources"
  location = "East US"
}

resource "azurerm_virtual_network" "main" {
  name                = "${var.prefix}-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
}

resource "azurerm_subnet" "internal" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.tpot-rg.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_virtual_machine" "main" {
  depends_on            = [azurerm_resource_group.tpot-rg]
  name                  = "${var.prefix}-vm"
  location              = azurerm_resource_group.tpot-rg.location
  resource_group_name   = azurerm_resource_group.tpot-rg.name
  network_interface_ids = [azurerm_network_interface.tpot-vm-nic.id]
  vm_size               = "Standard_A2m_v2"

  # Delete the OS disk automatically when deleting the VM
  delete_os_disk_on_termination = true

  # Delete the data disks automatically when deleting the VM
  delete_data_disks_on_termination = true

  storage_image_reference {
    publisher = "canonical"
    offer     = "ubuntu-24_04-lts"
    sku       = "minimal-gen1"
    version   = "latest"
  }

  storage_os_disk {
    name              = "tpot-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "azureuser"
    admin_password = "CyberNOW!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}

# Create Security Group to access linux
resource "azurerm_network_security_group" "tpot-nsg" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "linux-vm-nsg"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  security_rule {
    name                       = "AllowALL"
    description                = "AllowALL"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "AllowSSH"
    description                = "Allow SSH"
    priority                   = 150
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the linux NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "tpot-vm-nsg-association" {
  depends_on                = [azurerm_resource_group.tpot-rg]
  subnet_id                 = azurerm_subnet.internal.id
  network_security_group_id = azurerm_network_security_group.tpot-nsg.id
}

# Get a Static Public IP
resource "azurerm_public_ip" "tpot-vm-ip" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-ip"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name
  allocation_method   = "Static"
}

# Create Network Card for linux VM
resource "azurerm_network_interface" "tpot-vm-nic" {
  depends_on          = [azurerm_resource_group.tpot-rg]
  name                = "tpot-vm-nic"
  location            = azurerm_resource_group.tpot-rg.location
  resource_group_name = azurerm_resource_group.tpot-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.internal.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.tpot-vm-ip.id
  }
}

output "public_ip" {
  value = azurerm_public_ip.tpot-vm-ip.ip_address
}

Something I'm just going to note here because it's difficult information to find: if you want to find the SKU of a particular image, you can search for it with this syntax:

az vm image list --publisher Canonical --sku gen1 --output table --all

Type az login in the terminal to establish your credentials

az login

Initialize the directory

terraform init

Now terraform plan

terraform plan

Note: Take a look at the Terraform plan and see the 8 resources that we are creating. While not mandatory, it's good practice to 'terraform plan' to review your changes BEFORE deploying. Now terraform apply

terraform apply

It will output the public IP address. Just SSH into it with the credentials (ssh azureuser@) Username: azureuser Password: CyberNOW! And install the honeypot

env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"

Select "Hive" install

sudo reboot (when finished)

Note: The installation script changes the port that SSH listens on, so if you want to SSH to it you have to use this syntax "ssh azureuser@ -p 64295". You can now log in to the honeypot web interface via https://:64297 See how much easier this is than configuring it manually? This blog series won't go into detail about how to create a Terraform configuration from scratch, but at this point you understand the basic Terraform lifecycle and understand its application and what it's used for. I recommend now picking up a Udemy course on the Terraform Associate exam and spending the next couple of days studying for the exam.
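One convenience worth noting: `terraform output -json` prints your outputs as JSON, so the public_ip output can be pulled into a script instead of copied by hand. A minimal sketch; the sample outputs document below is hypothetical, using a documentation-range IP address:

```python
import json

def ssh_command(outputs_json: str, user: str = "azureuser", port: int = 64295) -> str:
    """Build the ssh command for the honeypot from `terraform output -json`."""
    outputs = json.loads(outputs_json)
    ip = outputs["public_ip"]["value"]
    # T-Pot's installer moves SSH to port 64295, hence the -p flag.
    return f"ssh {user}@{ip} -p {port}"

# Hypothetical `terraform output -json` document:
sample = '{"public_ip": {"sensitive": false, "type": "string", "value": "203.0.113.10"}}'
print(ssh_command(sample))  # ssh azureuser@203.0.113.10 -p 64295
```

In practice you would feed the function the real output, for example `subprocess.run(["terraform", "output", "-json"], capture_output=True)`.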
The Terraform Associate exam itself isn't very costly, and the certificate makes great wall art. When you are finished with the T-Pot, make sure you aren't charged anything further and use the "terraform destroy" command to remove everything you did in one swoop. Easy peasy. Join us next in this series as we conduct automated scans of Terraform files for configuration issues using the open source tool Checkov. Tyler Wall is the founder of Cyber NOW Education. He holds a Master of Science from Purdue University as well as CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, and four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn . To view my dozens of courses, visit my homepage and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success , winner of the 2024 Cybersecurity Excellence Awards.

  • Azure Infrastructure as Code - Part Three

    Azure Infrastructure as Code - Part Three To kick Azure Infrastructure as Code - Part Three off, we first need to install Terraform, and then we will continue with completing our very first Terraform lifecycle. Follow along in these two videos as we install Terraform on both Mac and Windows, then proceed with the instructions. Installing Terraform to Windows https://youtu.be/1er-WkfUBmU

curl.exe -O https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_windows_amd64.zip
Expand-Archive terraform_0.12.26_windows_amd64.zip
Rename-Item -path .\terraform_0.12.26_windows_amd64\ .\terraform

Installing Terraform to Mac https://youtu.be/VRzelcTMBkI

brew install terraform
terraform -install-autocomplete

Running your first Terraform With Terraform there is a lifecycle for a resource, and it can be broken down into four phases: init, plan, apply, and destroy. init — Initialize the (local) Terraform environment. Usually executed only once per session. plan — Compare the Terraform state with the as-is state in the cloud, then build and display an execution plan. This does not change the deployment (read-only). apply — Apply the plan from the plan phase. This potentially changes the deployment (read and write). destroy — Destroy all resources that are governed by this specific Terraform environment. This article assumes that you have created an Azure account and subscription. The first thing we will do is install the Azure CLI tool and configure it to be used with Terraform.
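Once authentication is configured, the four lifecycle phases can also be driven from a script. A minimal sketch, assuming `terraform` is on your PATH and the script runs from your configuration directory:

```python
import subprocess

def tf_args(phase: str, auto_approve: bool = False) -> list[str]:
    """Build the argument list for one lifecycle phase."""
    args = ["terraform", phase]
    # apply and destroy normally prompt for confirmation; -auto-approve skips it.
    if auto_approve and phase in ("apply", "destroy"):
        args.append("-auto-approve")
    return args

def run_phase(phase: str, auto_approve: bool = False) -> None:
    # check=True raises CalledProcessError if terraform exits non-zero,
    # stopping the sequence at the first failed phase.
    subprocess.run(tf_args(phase, auto_approve), check=True)

# Typical sequence (destroy is left for when you are done with the resources):
#   run_phase("init"); run_phase("plan"); run_phase("apply")
print(tf_args("apply", auto_approve=True))  # ['terraform', 'apply', '-auto-approve']
```

Scripting the sequence mirrors what a CI pipeline does with the same commands; for the labs that follow, running the commands by hand is fine.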
The Azure CLI Tool Install the Azure CLI tool with brew on macOS:

brew update && brew install azure-cli

To install the Azure CLI using PowerShell in Windows, start PowerShell as administrator and run the following command:

$ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://aka.ms/installazurecliwindows -OutFile .\AzureCLI.msi; Start-Process msiexec.exe -Wait -ArgumentList '/I AzureCLI.msi /quiet'; Remove-Item .\AzureCLI.msi

You can now run the Azure CLI with the az command from either Windows Command Prompt, PowerShell, or Mac Terminal. You will use the Azure CLI tool to authenticate with Azure. Terraform must authenticate to Azure to create infrastructure. In your terminal, use the Azure CLI tool to set up your account permissions locally.

az login

You have now logged in using the account you created in previous lectures. In the output in the terminal, find the ID of the subscription that you want to use:

{
  "cloudName": "AzureCloud",
  "homeTenantId": "0envbwi39-home-Tenant-Id",
  "id": "35akss-subscription-id",
  "isDefault": true,
  "managedByTenants": [],
  "name": "Subscription-Name",
  "state": "Enabled",
  "tenantId": "0envbwi39-TenantId",
  "user": {
    "name": "your-username@domain.com",
    "type": "user"
  }
}

Once you have chosen the account subscription ID, set the account with the Azure CLI.

az account set --subscription "35akss-subscription-id"

Next, we create a Service Principal. A Service Principal is an application within Azure Active Directory with the authentication tokens Terraform needs to perform actions on your behalf. Update the command with the subscription ID you specified in the previous step.

az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/

The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control.
For more information, see the assignment details.

    {
      "appId": "xxxxxx-xxx-xxxx-xxxx-xxxxxxxxxx",
      "displayName": "azure-cli-2022-xxxx",
      "password": "xxxxxx~xxxxxx~xxxxx",
      "tenant": "xxxxx-xxxx-xxxxx-xxxx-xxxxx"
    }

Next you need to set your environment variables. HashiCorp recommends setting these values as environment variables rather than saving them in your Terraform configuration. Open a Mac terminal or PowerShell and input the values that were output by the previous command, along with the subscription ID we got from the previous step.

For the Mac Terminal:

    export ARM_CLIENT_ID=""
    export ARM_CLIENT_SECRET=""
    export ARM_SUBSCRIPTION_ID=""
    export ARM_TENANT_ID=""

For PowerShell:

    $env:ARM_CLIENT_ID = "APPID_VALUE"
    $env:ARM_CLIENT_SECRET = "PASSWORD_VALUE"
    $env:ARM_TENANT_ID = "TENANT_VALUE"
    $env:ARM_SUBSCRIPTION_ID = "SUBSCRIPTION_ID"

Install Visual Studio Code and Set Up the Environment

Great! We are all configured to use Azure now. Next, open up a terminal and install Visual Studio Code by issuing this command on a Mac:

    brew install visual-studio-code

Or, on a Windows machine, navigate to this URL to download it. Next, in the terminal on Mac, issue the following commands to create a directory that will contain our Terraform configuration:

    mkdir ~/tf-exercise-1
    cd ~/tf-exercise-1

And open up a file for main.tf:

    code main.tf

On Windows, create a folder anywhere called "tf-exercise-1", create a new file called "main" with the file extension ".tf", and open that file with Visual Studio Code. Now we need to write configuration to create a new resource group.
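Before moving on, it can save you a confusing provider error later to sanity-check that the four ARM_* variables set above are actually present in your shell. This is a hypothetical pre-flight check, not something Terraform requires:

```shell
# Hypothetical pre-flight check: report any of the four ARM_* credential
# variables (read by the azurerm provider) that are missing or empty.
check_arm_env() {
  ok=0
  for v in ARM_CLIENT_ID ARM_CLIENT_SECRET ARM_SUBSCRIPTION_ID ARM_TENANT_ID; do
    eval "val=\${$v:-}"          # indirect lookup of the variable named in $v
    if [ -z "$val" ]; then
      echo "missing: $v"
      ok=1
    fi
  done
  return $ok                     # non-zero exit if anything was missing
}
```

Run check_arm_env in the same terminal session you will run Terraform from; environment variables do not carry over between sessions.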
Copy and paste the code snippet into the "main.tf" file:

    # Configure the Azure provider
    terraform {
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~> 3.0.2"
        }
      }

      required_version = ">= 1.1.0"
    }

    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "rg" {
      name     = "myTFResourceGroup"
      location = "westus2"
    }

Note: The location of your resource group is hardcoded in this example. If you do not have access to the resource group location westus2, update the main.tf file with your Azure region. This is a complete configuration that Terraform can apply. In the following sections we will review each block of the configuration in more detail.

Terraform Block

The terraform {} block contains Terraform settings, including the required providers Terraform will use to provision your infrastructure. For each provider, the source attribute defines an optional hostname, a namespace, and the provider type. Terraform installs providers from the Terraform Registry by default. In this example configuration, the azurerm provider's source is defined as hashicorp/azurerm, which is shorthand for registry.terraform.io/hashicorp/azurerm.

You can also define a version constraint for each provider in the required_providers block. The version attribute is optional, but we recommend using it to enforce the provider version. Without it, Terraform will always use the latest version of the provider, which may introduce breaking changes.

Providers

The provider block configures the specified provider, in this case azurerm. A provider is a plugin that Terraform uses to create and manage your resources. You can define multiple provider blocks in a Terraform configuration to manage resources from different providers.

Resource

Use resource blocks to define components of your infrastructure. A resource might be a physical component such as a server, or it can be a logical resource such as a Heroku application.
Resource blocks have two strings before the block: the resource type and the resource name. In this example, the resource type is azurerm_resource_group and the name is rg. The prefix of the type maps to the name of the provider. In the example configuration, Terraform manages the azurerm_resource_group resource with the azurerm provider. Together, the resource type and resource name form a unique ID for the resource. For example, the ID for your resource group is azurerm_resource_group.rg.

Resource blocks contain arguments which you use to configure the resource. The Azure provider documentation documents supported resources and their configuration options, including azurerm_resource_group and its supported arguments.

Initialize your Terraform configuration

Initialize your tf-exercise-1 directory in your terminal. The terraform commands will work with any operating system.

    terraform init

Your output should look similar to this one:

    Initializing the backend...

    Initializing provider plugins...
    - Finding hashicorp/azurerm versions matching "~> 3.0.2"...
    - Installing hashicorp/azurerm v3.0.2...
    - Installed hashicorp/azurerm v3.0.2 (signed by HashiCorp)

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.

    If you ever set or change modules or backend configuration for Terraform,
    rerun this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.

Format and validate the configuration

We recommend using consistent formatting in all of your configuration files. The terraform fmt command automatically updates configurations in the current directory for readability and consistency. Format your configuration. Terraform will print out the names of the files it modified, if any.
In this case, your configuration file was already formatted correctly, so Terraform won't return any file names.

    terraform fmt

You can also make sure your configuration is syntactically valid and internally consistent by using the terraform validate command. The example configuration provided above is valid, so Terraform will return a success message.

    terraform validate
    Success! The configuration is valid.

Apply your Terraform Configuration

Run the terraform apply command to apply your configuration. The output shows the execution plan and will prompt you for approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure. Type yes at the confirmation prompt to proceed.

    terraform apply

    An execution plan has been generated and is shown below.
    Resource actions are indicated with the following symbols:
      + create

    Terraform will perform the following actions:

      # azurerm_resource_group.rg will be created
      + resource "azurerm_resource_group" "rg" {
          + id       = (known after apply)
          + location = "westus2"
          + name     = "myTFResourceGroup"
        }

    Plan: 1 to add, 0 to change, 0 to destroy.

    Do you want to perform these actions?
      Terraform will perform the actions described above.
      Only 'yes' will be accepted to approve.

      Enter a value: yes

    azurerm_resource_group.rg: Creating...
    azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/myTFResourceGroup]

    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Navigate to the Azure portal in your web browser to confirm that the resource group was created.

Inspect your state

When you apply your configuration, Terraform writes data into a file called terraform.tfstate. This file contains the IDs and properties of the resources Terraform created so that it can manage or destroy those resources going forward.
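As an aside, the "Plan: X to add..." line in the apply output above is handy when you start scripting Terraform. A hypothetical helper (the function name and file are mine, not Terraform's) that pulls that summary line out of saved plan or apply output:

```shell
# Hypothetical helper: extract the one-line change summary from saved
# `terraform plan`/`terraform apply` output, e.g.:
#   terraform plan -no-color > plan.txt
plan_summary() {
  grep -o 'Plan: [0-9]* to add, [0-9]* to change, [0-9]* to destroy' "$1"
}
```

A quick check like this lets a CI job flag any run that plans to destroy resources.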
Your state file contains all of the data in your configuration and could also contain sensitive values in plaintext, so do not share it or check it in to source control. Inspect the current state using terraform show.

    terraform show

    # azurerm_resource_group.rg:
    resource "azurerm_resource_group" "rg" {
        id       = "/subscriptions/c9ed8610-47a3-4107-a2b2-a322114dfb29/resourceGroups/myTFResourceGroup"
        location = "westus2"
        name     = "myTFResourceGroup"
    }

When Terraform created this resource group, it also gathered the resource's properties and metadata. These values can be referenced to configure other resources or outputs. To review the information in your state file, use the state command. If you have a long state file, you can see a list of the resources you created with Terraform by using the list subcommand.

    terraform state list
    azurerm_resource_group.rg

If you run terraform state, you will see a full list of available commands to view and manipulate the configuration's state.

    terraform state
    Usage: terraform state <subcommand> [options] [args]

    This command has subcommands for advanced state management.

    These subcommands can be used to slice and dice the Terraform state. This is
    sometimes necessary in advanced cases. For your safety, all state management
    commands that modify the state create a timestamped backup of the state prior
    to making modifications.

    The structure and output of the commands is specifically tailored to work well
    with the common Unix utilities such as grep, awk, etc. We recommend using those
    tools to perform more advanced state tasks.

Terraform Destroy

Lastly, issue the terraform destroy command to complete the lifecycle and undo the changes that you made. Terraform keeps a record of the changes you made in the Terraform state file, so it knows exactly which ones to undo.
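Before destroying, it's worth confirming exactly what Terraform currently manages. Taking Terraform's own advice about pairing state output with Unix utilities, here is a hypothetical grep wrapper (names are mine) over saved `terraform state list` output:

```shell
# Hypothetical helper: filter saved `terraform state list` output for one
# resource type, e.g.:
#   terraform state list > state.txt
#   state_of_type state.txt azurerm_resource_group
state_of_type() {
  # addresses look like "azurerm_resource_group.rg", so anchor on "<type>."
  grep "^$2\." "$1"
}
```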
    terraform destroy

    # azurerm_resource_group.rg will be destroyed
      - resource "azurerm_resource_group" "rg" {
          - id       = "/subscriptions/b7b18fdb-6e24-4934-a25e-2957c9e62d05/resourceGroups/myTFResourceGroup" -> null
          - location = "westus2" -> null
          - name     = "myTFResourceGroup" -> null
          - tags     = {} -> null
        }

    Plan: 0 to add, 0 to change, 1 to destroy.

    Do you really want to destroy all resources?

Summary

You have now completed your very first Terraform lifecycle. Congratulations! It's fairly simple; the configuration files get more complex from here, but the steps and lifecycle remain the same. We just created a resource group in Azure, but we will continue the Terraform exercises by doing something a little more complex and deploying a honeypot using Terraform.

Tyler Wall is the founder of Cyber NOW Education. He holds bills for a Master of Science from Purdue University and also CISSP, CCSK, CFSR, CEH, Sec+, Net+, and A+ certifications. He mastered the SOC after having held every position from analyst to architect and is the author of three books, 100+ professional articles, and four online courses, and regularly holds webinars for new cybersecurity talent. You can connect with him on LinkedIn.

To view my dozens of courses, visit my homepage and watch the trailers! Become a Black Badge member of Cyber NOW® and enjoy all-access for life. Check out my latest book, Jump-start Your SOC Analyst Career: A Roadmap to Cybersecurity Success, winner of the 2024 Cybersecurity Excellence Awards.

  • Azure Infrastructure as Code - Part One

    Azure Infrastructure as Code

In this series of blog posts we're going to get hands-on with Cloud Security. One of the biggest challenges that people face is that they can't get a job in Cloud Security because they don't have experience, and since they don't have experience they can't get a job. This series will focus on Azure Cybersecurity Labs.

Cloud computing has grown by leaps and bounds in the last decade, and most, if not all, companies are migrating to one of the big three players in the Cloud: AWS, Azure, and GCP. While most companies operate in a multi-cloud approach, meaning they are operating in two or more of the big three, we will be focusing specifically on Azure in these labs. I am an advocate of the Microsoft Cloud, and I feel it's the safest bet for your career: most large enterprises have an Active Directory infrastructure, and it makes the most sense for those companies to move into the Azure cloud. I am betting my future that Azure will dominate the cloud market by the end of the 2020s. Microsoft has a holistic solution for managing infrastructure in the cloud, and their cloud security products aren't too shabby either. I enjoy using the Defender suite of products, and I know for a fact they're being widely adopted everywhere; they will be the standard security tooling in the future for many, many large enterprises.

By the end of this series, you will be able to say you have experience with deploying and managing Azure infrastructure as code, scanning infrastructure code for misconfigurations, and using open-source tools to scan your Azure environment against security best practices. Cloud Security certifications are important, but what's more important is that you have hands-on experience with the Cloud and understand why the certification bodies think this information is important to know. BELIEVE ME, it won't make sense completely by just studying for an exam. You have to do it for yourself for it to click. At least, that's how it was for me.
And then you can put on your resume REAL experience that you've gained, which will work for you as you apply for your next job; or you can create Fiverr or Upwork services to conduct independent assessments for small-to-medium-sized businesses. I am excited to start this journey with you guys, and if you didn't already complete the lab posted yesterday for the honeypot project, then your first task is to sign up and get your free credits from Azure. The credits are valid for a month, and I hope to have this wrapped up before they expire, but no promises! Talk to you soon.

  • Azure Infrastructure as Code - Part Two

    Azure Infrastructure as Code - Part Two

The first thing that we will be covering in this course - Azure Infrastructure as Code - is what infrastructure as code is and why it is important. Infrastructure as Code (IaC) is about using code to manage the computing infrastructure in the cloud rather than pointing and clicking in the GUI. This includes things like operating systems, databases, and storage, to name a few. Traditionally, we had to spend lots of time setting up and maintaining infrastructure, going through lengthy processes when we wanted to create something new or delete entire environments. With IaC, you can define what you want your infrastructure to look like with code, without worrying about all the detailed steps to get there. For instance, you can just say that you want a Debian server with 12 GB of RAM and 80 GB of hard drive space, and it figures out everything it needs to do to make that happen.

Benefits of Infrastructure as Code

Automation is a key goal in computing, and IaC is a way to automate infrastructure management. There are several benefits to using IaC, and one of them is easy environment duplication. You can use the same IaC to deploy an environment in one location that you do in another. If a business has IaC describing its entire regional branch's environment, including servers and networking, it can just copy and paste the code, then execute it again to set up a new branch location.

Another benefit of using IaC is reduced configuration errors. Manual configurations are error-prone due to human mistakes, so automating them with IaC reduces errors. It also makes error checking more streamlined. Later in this course we will be using tools to check IaC configurations for issues, but for now, just know you can take a piece of IaC code and evaluate it for misconfigurations before you actually deploy it. The last benefit I want to cover for IaC is the ability to build and branch on environments easily.
For instance, if a new feature like a machine learning module is invented, developers can branch the IaC to deploy and test it without affecting the main application.

How does IaC work?

IaC works by describing a system's architecture and functionality, just like software code describes an application. It uses configuration files, treated like source code, to manage virtualized resources in the cloud. These configuration files can be maintained under source control as part of the overall codebase.

Immutable vs. Mutable Infrastructure

There are two approaches to IaC: mutable and immutable infrastructure. In mutable infrastructure, components are changed in production while the service continues to operate normally. With immutable infrastructure, components are set and assembled to create a full service or application; if any change is required, the entire set of components has to be deleted and redeployed to be updated.

Approaches to IaC

There are two basic approaches to IaC: declarative and imperative. Declarative describes the desired end state of a system, and the IaC solution creates it accordingly. It's simple to use if the developer knows what components and settings are needed. Imperative describes all the steps to set up resources to reach the desired running state. It's more complex, but necessary for intricate infrastructure deployments where the order of events matters.

Terraform IaC

An open-source tool, Terraform, takes an immutable, declarative approach and uses its own language, HashiCorp Configuration Language (HCL). HCL is written in Go and is considered one of the easiest languages to pick up for IaC. I have the Terraform Associate certification, and it took me all of three days to pick up the language. By the end of these labs, I'd highly suggest picking up a study guide for the exam, since you'll already be two-thirds of the way there. With Terraform, you can use the same configuration for multiple cloud providers.
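The heart of the declarative approach is computing the difference between what you declared and what exists, which is exactly what Terraform's plan phase does. Here is a toy illustration (file names and the helper are hypothetical, and real tools diff structured resource attributes, not text lines):

```shell
# Toy model of declarative planning: given a file listing desired resources
# and a file listing resources that currently exist, print what would need
# to be created -- the spirit of `terraform plan`.
plan_to_create() {
  sort "$1" > /tmp/_desired.sorted
  sort "$2" > /tmp/_actual.sorted
  # comm -13: drop lines only in actual (-1) and lines in both (-3),
  # leaving lines only in desired, i.e. resources to create
  comm -13 /tmp/_actual.sorted /tmp/_desired.sorted
}
```

An imperative tool, by contrast, would make you write the create/update steps yourself in the right order.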
And since many organizations today opt for the hybrid cloud model, Terraform can easily be called the most popular IaC tool. Terraform is capable of both provisioning and configuration management, but it's inherently a provisioning tool that uses cloud provider APIs to manage required resources. And since it natively and easily handles the orchestration of new infrastructure, it's better equipped to build immutable infrastructures, where you have to replace components fully to make changes. Terraform uses state files to manage infrastructure resources and track changes. State files record everything Terraform builds, so you can easily refer to them. We'll get more into this later. Often considered an obvious choice for an IaC tool, Terraform is what we will be using in this course. So let's get started.
