How to Improve Security Standards in Education

Explore top LinkedIn content from expert professionals.

Summary

Improving security standards in education means protecting students, staff, and sensitive data from physical and digital harm by updating policies, technology, and daily practices in schools and universities. That covers safety in classrooms, in online environments, and on devices, so everyone can learn in a secure setting.

  • Prioritize real-world testing: Evaluate new security technology with hands-on trials in your school’s actual environment and gather feedback from staff who will use these tools every day.
  • Update digital safety practices: Regularly review and revise school rules to address emerging threats like AI-generated images, and protect student data by minimizing public sharing of photos or information.
  • Include hands-on cybersecurity learning: Shift from purely theoretical education to practical, experience-based training in cybersecurity so students and staff are prepared to recognize and tackle real-world risks.
Summarized by AI based on LinkedIn member posts
  • View profile for Dr. Kenneth S. Trump

    Helping School Superintendents and Attorneys Navigate School Safety, Security, Crisis Response, & Litigation | Expert Witness | Consultant | Speaker | Author | President, National School Safety & Security Services

    5,411 followers

    Buyer Beware: As private equity creeps deeper into the #schoolsecurity product and technology market, smaller companies get gobbled up into bigger entities. Bundled services. Louder marketing. Lobbyists hired. But quality risks going downhill.

    Our team worked with one district where a once-reputable visitor management system failed miserably across dozens of schools: service challenges, unresolved problems, and frustration among #schoolleaders everywhere.

    As I told ABC News in their recent network investigation, “I call it #securitytheater. We often find huge gaps between how security products and technology are marketed and how they actually work — or don’t.” Veteran #schoolsafety professional Curt Lavarello made a similar observation: “All of this technology is very, very expensive... and many products may not necessarily do what they’re being sold to do.”

    What can school leaders do?

    🔹 Closing the Gap Between Marketing and Reality in School Security Tech

    ✅ Verify before you buy: Require live demonstrations in your actual school environment, not just vendor videos or conference booths.
    ✅ Talk to other districts: Contact peer school systems directly for candid, real-world feedback on performance and service.
    ✅ Pilot first, purchase later: Test products in a few schools under normal operating conditions before committing districtwide.
    ✅ Include front-line voices: Get input from principals, teachers, and security staff who actually use the systems daily.
    ✅ Check service and support history: Ask vendors for documented response times and maintenance logs.
    ✅ Evaluate total cost of ownership: Factor in upgrades, repairs, and ongoing subscription or licensing fees, not just purchase price.
    ✅ Demand data, not promises: Require measurable performance metrics and hold vendors accountable for outcomes.
    ✅ Maintain human-centered balance: Reinforce that no technology replaces staff vigilance, supervision, and relationship-building.
    ✅ Audit periodically: Conduct third-party performance reviews to confirm systems are still functioning as intended.
    ✅ Keep control local: Don’t let product and tech vendors, or private equity owners, dictate what “security” should look like in your schools.

    📖 Read more in the story that picked up on the ABC News investigation: https://zurl.co/IDR5w
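The "evaluate total cost of ownership" step above can be made concrete with a quick back-of-the-envelope comparison. Here is a minimal sketch in Python; all vendor figures and cost categories are hypothetical, not drawn from the post.

```python
# Hypothetical total-cost-of-ownership comparison for two vendors.
# Figures are illustrative only.

def total_cost_of_ownership(purchase, annual_subscription,
                            annual_maintenance, upgrades, years):
    """Sum up-front cost, recurring fees over the contract life, and upgrades."""
    recurring = (annual_subscription + annual_maintenance) * years
    return purchase + recurring + upgrades

# Vendor A: low sticker price, expensive subscription.
vendor_a = total_cost_of_ownership(purchase=20_000, annual_subscription=12_000,
                                   annual_maintenance=3_000, upgrades=5_000, years=5)
# Vendor B: higher sticker price, lower recurring fees.
vendor_b = total_cost_of_ownership(purchase=45_000, annual_subscription=4_000,
                                   annual_maintenance=2_000, upgrades=2_000, years=5)

print(f"Vendor A 5-year TCO: ${vendor_a:,}")  # $100,000
print(f"Vendor B 5-year TCO: ${vendor_b:,}")  # $77,000
```

The point of the exercise: the "cheaper" vendor at purchase time can cost substantially more over a five-year contract once subscriptions and maintenance are counted.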

  • View profile for Maryam Shuaibu Aliyu (Cyber Hijabi)

    I help organizations and communities manage cyber risk and build security awareness. | Security Awareness Trainer | ISO/IEC 27001 | Cyber Risk & Compliance | Founder, Cyber Hijabi Tech

    12,871 followers

    I saw a tweet yesterday that said, “Cybersecurity in universities is too theoretical. Graduates finish without real hands-on skills.” And honestly? For many Nigerian universities, this is a hard truth we need to confront. Cybersecurity cannot be mastered from slides alone. It’s a practical, problem-solving field that requires exposure, labs, mistakes, and simulations.

    So… what is the way forward for Nigerian universities? Here’s what must change:

    🔹 1. From Theory-Heavy to Practice-Driven Curriculum
    Universities should redesign cybersecurity courses to include virtual labs, simulated cyber-attacks and defenses, incident response exercises, and risk assessment case studies. Cybersecurity is learned by doing, not memorizing.

    🔹 2. Industry-Academia Collaboration
    Universities should actively partner with cybersecurity firms, tech hubs, and security professionals. Guest lectures, real-world case studies, and joint projects expose students to current threats, not outdated examples.

    🔹 3. Integration of Open-Source & Low-Cost Tools
    Hands-on learning doesn’t require expensive licenses: Kali Linux, OWASP tools, open-source SIEMs, and cloud security sandboxes all work. With creativity, labs can run even on low-resource systems.

    🔹 4. Certification-Aligned Learning Paths
    Courses should align with globally recognized certifications: security fundamentals, risk & compliance, and blue-team / defensive security. This makes graduates job-ready, not just degree-holders.

    🔹 5. Empower Lecturers Through Continuous Upskilling
    You can’t teach modern cybersecurity with outdated skills. Lecturers need regular industry exposure, hands-on retraining, and access to current threat intelligence.

    🔹 6. Encourage Student-Led Cyber Communities
    Universities should support cybersecurity clubs, Capture-the-Flag (CTF) challenges, peer-to-peer learning, and hackathons. This is where real learning often happens.

    🔹 7. Policy & Curriculum Reform at the National Level
    Bodies like the National Universities Commission must review cybersecurity curricula regularly, enforce minimum practical lab requirements, and align education with national digital security goals.

    If Nigerian universities continue producing theory-only cybersecurity graduates, the skills gap will keep widening and the industry will keep looking elsewhere. The future of cybersecurity education in Nigeria must be practical, industry-aligned, inclusive, and future-focused.

    Over to you: Did your university teach cybersecurity hands-on? What practical changes would you like to see? Let’s start the conversation, because the future of cybersecurity depends on how we teach it today. #Cybersecurity #Mondaymotivation #Universitydegree
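As one concrete illustration of the low-cost, hands-on labs described above (my example, not from the post): a classic beginner CTF exercise has students recover a password from an unsalted MD5 hash using a dictionary attack, with nothing beyond the Python standard library. The hash and wordlist here are invented for the sketch.

```python
import hashlib

# Beginner CTF-style exercise: a "leaked" database stores an unsalted MD5
# hash; students recover the password with a dictionary attack.
leaked_hash = hashlib.md5(b"letmein").hexdigest()  # stands in for a dump entry

wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

def dictionary_attack(target_hash, candidates):
    """Return the first candidate whose MD5 digest matches target_hash, else None."""
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

recovered = dictionary_attack(leaked_hash, wordlist)
print(f"Recovered password: {recovered}")  # letmein
```

An exercise this small also sets up the defensive half of the lab: students see immediately why unsalted fast hashes fail, which motivates salting and slow password hashes (bcrypt, Argon2).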

  • View profile for Parven Kaur

    Founder of Award-Winning Digital Parenting Platform, Kids N Clicks | Empowering Employers with Digital Parenting Support | Supporting Schools & Safeguarding Leads in Online Safety | Digital Inclusion Scotland

    7,535 followers

    Guardrails are not enough

    Do note that platforms like ChatGPT and Gemini do have censorship guardrails in place. They are not completely unregulated. However, recent findings show these safety measures are not very hard to bypass. Reports indicate that users can trick these models using "basic prompts in plain English" to remove clothing from photos. On forums like Reddit, users have shared simple instructions on how to fool the AI, proving that while policies exist, they can be "trivial to bypass".

    The "Locked Door" Problem

    We have spent years working on the Online Safety Act and age-gating to block the "front door" to adult websites. We wanted to stop young people from consuming harmful content. But we forgot about image generation. While we locked the front door, AI opened a side window. Young people adopt technology faster than adults, and they now have access to tools that can generate photorealistic content. Standard safety filters often block "adult sites," but they miss these AI tools and "nudify" apps.

    A Wake-Up Call for Schools: You might be providing the source material

    To generate a deepfake that "preserves facial identity," the AI needs a clear source photo. For years, schools have celebrated students by posting high-resolution photos of sports days and awards on public social media pages. This needs to stop. By posting these photos publicly, we are providing the raw material that bad actors, or even other students, can use. When you combine accessible student photos with AI tools that are easy to trick, the risk is immediate.

    The New Safety Standard

    We need to change the rules immediately:
    1. Update Bullying Policies: Schools must specifically ban "AI-generated imagery" in their bullying and harassment policies.
    2. Stop Posting Photos: Schools need to treat student faces as sensitive personal data, not marketing content for social media.
    3. A New Conversation for Parents: It is no longer just "what are you watching?" It is now "what are you creating?" Parents need to know that a smartphone is now a production studio.

    We are teaching students online safety, but are we practicing it with their images? Share this with another teacher or parent in your network.

  • View profile for Gareth Young

    Founder & Chief Architect, Levacloud | Microsoft 365 Security & Compliance | Defender · Intune · Purview

    8,293 followers

    Microsoft Intune is incredibly powerful, which is exactly why its admin layer can’t be an afterthought. When Intune admin controls are weak, one compromised account or misclick can impact every managed device.

    Three practices I see underused, especially in K-12:

    • Design Intune access around real jobs, not blanket roles. Global Admin and Intune Administrator shouldn’t be everyday roles. Use Intune RBAC to match real job functions, scope admins only to the users/devices they support, and rely on scope tags/scope groups for multi-school or multi-region environments.

    • Treat privileged Intune access as high-risk, every time. If an account can wipe devices, deploy scripts, or change RBAC, a password alone isn’t enough. Require phishing-resistant MFA, compliant devices, and risk-based Conditional Access for all admin portals. Use Entra PIM so elevated rights are time-bound and auditable instead of permanently on.

    • Add a second set of eyes for high-impact changes. Multi-Admin Approval in Intune is brand new, so almost no one has it wired in yet. Start small by putting your riskiest workflows (RBAC changes, device wipes, script deployments) behind approval, with clear approvers and a simple break-glass path for true emergencies.

    Get these in place and Intune shifts from “we trust our admins” to security by design: smaller blast radius, stronger identity assurance, and guardrails around the changes that actually matter. For K-12, where student data and classroom devices are mission-critical, that’s a worthwhile upgrade.

    #K12IT #K12EdTech #K12Cybersecurity #EducationSecurity #MicrosoftIntune #EndpointManagement #DeviceManagement #ZeroTrust #IdentitySecurity #EntraID #SchoolIT #DistrictIT #TechLeadership #CIO #CTO #StudentDataProtection #EdTechLeadership
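The first practice above, scoping admins to only the users and devices they support, is expressed in Intune through role assignments that carry resource scopes. Below is a rough sketch of the payload such an assignment takes in the Microsoft Graph beta Intune API; the endpoint path and property names should be verified against current Microsoft documentation before use, and every ID shown is a placeholder.

```python
import json

# Sketch of a least-privilege Intune role assignment via Microsoft Graph.
# Endpoint and property names follow the Graph beta Intune API; verify
# against current docs. All IDs below are placeholders, not real objects.

GRAPH_URL = ("https://graph.microsoft.com/beta/deviceManagement/"
             "roleDefinitions/{role_id}/roleAssignments")

def build_scoped_assignment(display_name, admin_group_ids, scope_group_ids):
    """Payload granting a role to admin groups, limited to specific scope groups."""
    return {
        "displayName": display_name,
        "members": admin_group_ids,         # Entra ID groups holding the admins
        "resourceScopes": scope_group_ids,  # groups of devices/users they may manage
    }

payload = build_scoped_assignment(
    display_name="Helpdesk - North Campus only",
    admin_group_ids=["<helpdesk-group-id>"],
    scope_group_ids=["<north-campus-devices-group-id>"],
)
print(json.dumps(payload, indent=2))
# An authenticated Graph client would POST this body to
# GRAPH_URL.format(role_id="<helpdesk-operator-role-id>").
```

The design point matches the post: the helpdesk group gets only the permissions of one role, and only over one campus's devices, so a compromised helpdesk account cannot touch the rest of the district.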

  • View profile for Grace Chng

    Author & Storyteller, Champion for women in tech and women in sports

    1,788 followers

    I believe technology can be harnessed for good. But horrors! Recent news headlines remind us of the urgent need for stronger online safety and digital ethics education.

    - Some boys at Singapore Sports School are alleged to have created and shared deepfake nude photos of female students. Police are investigating.
    - Scam victims in Singapore are on track to lose over S$770 million by year-end.
    - Singapore police may be given the authority to prevent repeat scam victims from making online banking transactions, to protect them from further financial loss. Alarmingly, some victims continue to transfer money to scammers even after being informed of the fraud, a scenario that once seemed unimaginable!

    These troubling incidents highlight persistent challenges surrounding online behaviour, ethics, and safety, issues that have existed since the dawn of the Internet. Singapore has been proactive from the early days. In the early 2000s, I was part of PAGI (Parents Advisory Group for the Internet), set up to help parents guide their children on safe online practices, and the Media Literacy Council, which helps Singaporeans evaluate media and create and share content safely and responsibly.

    However, technology, especially generative AI, is evolving rapidly. Our digital-native generation, born into a world of bright shiny digital screens and AI, needs a deep understanding of online safety and digital ethics. What can we do to build a strong foundation in online safety hygiene and digital ethics?

    1. Integrate Online Safety into Education: Digital safety and awareness must start early, as soon as children enter kindergarten, and continue throughout their school years. Kids today are adept at using smartphones and tablets as young as five, so online safety must be part of their foundational learning.
    2. Teach Ethics and Responsible Technology Use: Responsible use of digital devices and platforms should be an integral part of schools’ digital safety and awareness curriculum. This is necessary to instil ethical guidelines and a clear understanding of consequences.
    3. Pair Technology Access with Safety Training: Digital devices are essential tools for modern learning, but it’s equally crucial that teachers guide students on online safety and responsible device use. Teaching safe, mindful use of technology is essential, not optional.
    4. Engage Parents in Digital Safety Education: Parents are vital to fostering online safety and digital ethics. As primary role models, the onus lies with them to guide their children’s digital habits. To do this effectively, parents need to be equipped with the knowledge and resources to promote safe and responsible device use.

    Creating a digitally responsible society requires a multi-faceted approach that begins with education and active involvement from all stakeholders. It’s time we double down on these efforts. #onlinesafety #digitalethics

  • View profile for Nicole Williamson

    CEO - ECP Safeguarding, UK - Safeguarding - Child Protection - Training - Consultant - Specialist

    12,824 followers

    🔐 Safeguarding & AI in Education: What leaders need to know

    As AI tools become more common in schools and colleges, safeguarding professionals must stay alert to new risks and responsibilities. Here's a quick summary of key points from the latest guidance:

    📌 AI use must follow safeguarding laws
    Tools must comply with Keeping Children Safe in Education and DfE product safety standards. Most free AI tools aren't safe for student use. “Safeguarding is everyone’s responsibility and should be the top priority when deciding whether and how to use generative AI in an educational setting.”

    📌 Online safety policies must evolve
    AI can amplify offline behaviour issues. Update behaviour and online safety policies to reflect new risks like deepfakes and sexual extortion.

    📌 Data protection is critical
    Photos of students are personal data. Schools must have a lawful basis for using them and clear privacy notices. “Photos that identify individual children are regarded as personal data.”

    📌 Staff training is essential
    Regular training helps staff spot and respond to AI-related risks. This includes recognising inappropriate content, cyberbullying, and exploitation.

    📌 Mental health tools need regulation
    Apps supporting mental health must be regulated by the Medicines and Healthcare products Regulatory Agency (MHRA).

    📌 AI decisions need human oversight
    AI can support decision-making but must not replace it. No student outcome should be decided by AI alone. “Transparency and human oversight are essential to ensure AI systems assist, but do not replace, human decision-making.”

    📌 Annual reviews are a must
    Schools should review filtering and monitoring systems yearly to stay ahead of emerging threats.

    🛡️ A key read prior to the release of KCSiE 2025: https://lnkd.in/dmZjvrKU

    #Safeguarding #EducationAI #OnlineSafety #ChildProtection #SchoolLeadership #AIinEducation

  • View profile for David DellaPelle

    CEO & Co-Founder at Dune Security. Prevent Social Engineering and Insider Threat Across Every Channel. We are #hiring.

    14,300 followers

    When every student, faculty member, and researcher has a different risk profile, generic training isn’t enough. 🎓

    At Stevens Institute of Technology, legacy SAT tools treated everyone the same, ignoring the fact that an AI researcher, a financial aid officer, and a facilities staffer face very different threats. CISO Jeremy Livingston needed a modern approach, and that’s exactly what Dune Security delivered:

    → Department- and individual-level risk scoring for the first time
    → Training content tailored to higher education and research environments
    → Seamless integrations with Workday and Microsoft for real-time updates
    → High-risk users automatically flagged for targeted follow-up

    The result? Stevens went from generic compliance to proactive risk management, with measurable visibility into user-layer vulnerabilities. As Jeremy puts it: “We can’t keep doing the same thing and expect different results. Dune Security is doing something new and exciting — and they’re showing quantifiable outcomes. That’s been the key for us.”

    Full case study here: https://bit.ly/3Kp6tV0
