Posts

Euclidean Distance in NR-V2X Mode 2

In wireless networks like NR-V2X Mode 2 (New Radio Vehicle-to-Everything, the sidelink mode where vehicles autonomously select resources), measuring the distance between vehicles or signals is critical for efficient communication. One common metric is the Euclidean distance.

What is Euclidean Distance?

Euclidean distance is the "straight-line" distance between two points in space. Mathematically, for two points P1(x1, y1) and P2(x2, y2) in 2D space, it is:

d = √((x2 - x1)² + (y2 - y1)²)

In 3D or higher dimensions, you simply add a squared difference for each additional coordinate.

Why is it Useful in NR-V2X Mode 2?

- Resource Selection: Vehicles in Mode 2 autonomously pick radio resources. Knowing the Euclidean distance between vehicles helps avoid interference, because distant vehicles can reuse the same resources without collision.
- Collision Avoidance: Signals from nearby vehicles are more likely to collide. By ...
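A minimal Python sketch of this computation, using made-up vehicle coordinates and an illustrative (non-standard) reuse threshold:

```python
import math

def euclidean_distance(p1, p2):
    """Straight-line distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical 2D positions of two vehicles, in metres.
v1 = (120.0, 45.0)
v2 = (300.0, 80.0)

d = euclidean_distance(v1, v2)
print(f"Distance between vehicles: {d:.1f} m")

# Illustrative reuse rule; the threshold is an example value, not a standard.
REUSE_THRESHOLD_M = 150.0
print("Safe to reuse the same resource." if d > REUSE_THRESHOLD_M
      else "Too close: pick a different resource.")
```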
Understanding Maximum Network Resources

In wireless networks and communication systems, devices like smartphones, smart vehicles, or IoT sensors need resources to communicate effectively. These resources are the "tools" the network provides so data can flow smoothly.

- Bandwidth: Think of bandwidth as the width of a highway. A wider highway allows more cars (data) to travel simultaneously. Higher bandwidth means more data can be sent at the same time.
- Data Rate: This is how fast the data moves across the network, like the speed of a car on the highway. Higher data rates mean information reaches the destination faster.
- Maximum Resource Allocation: Every network has limits. The maximum amount of resources refers to the upper limit the network can give a device at a time, such as the largest chunk of bandwidth or the fastest data rate it can handle.

When planning networks or designing algorithms like Deep Q-Networks (DQN) for smart vehicles ...
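To make the idea concrete, here is a tiny Python sketch of clamping a device's request to a network's maximum resources; all limits and the request below are invented example values, not figures from any standard:

```python
# Cap a device's request at the network's maximum resources.
MAX_BANDWIDTH_MHZ = 100.0    # widest "highway" the network can offer (example)
MAX_DATA_RATE_MBPS = 1000.0  # fastest rate the network can sustain (example)

def allocate(requested_bw_mhz: float, requested_rate_mbps: float) -> tuple[float, float]:
    """Grant a request, clamped to the network's upper limits."""
    granted_bw = min(requested_bw_mhz, MAX_BANDWIDTH_MHZ)
    granted_rate = min(requested_rate_mbps, MAX_DATA_RATE_MBPS)
    return granted_bw, granted_rate

bw, rate = allocate(requested_bw_mhz=150.0, requested_rate_mbps=800.0)
print(f"Granted {bw} MHz and {rate} Mbps")  # -> Granted 100.0 MHz and 800.0 Mbps
```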
Understanding Video & App Performance Metrics

Evaluating user experience goes beyond just network speed. Key metrics include PSNR, video freeze duration, and application-level throughput. These metrics together help us understand the Quality of Experience (QoE).

Figure: Network throughput affects PSNR (video quality) and video freeze duration, both contributing to QoE.

1. Peak Signal-to-Noise Ratio (PSNR)
PSNR measures the quality of video or images after transmission. Higher PSNR means clearer, sharper video; lower PSNR leads to blurry or noisy playback.

2. Video Freeze Duration
Video freeze duration is the total time a video pauses or stalls during playback. Long freezes cause frustration, even if the rest of the video plays smoothly.

3. Application-Level Throughput
This measures the data successfully delivered to the application per unit time. High throughput ensures smooth playback; low throughput ...
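The excerpt does not spell out the PSNR formula; the standard definition is PSNR = 10·log10(MAX²/MSE), where MSE is the mean squared error between the original and received frames. A small Python sketch using that definition on synthetic frames:

```python
import numpy as np

def psnr(original: np.ndarray, received: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB between an original frame and the frame after transmission.

    Uses the standard definition PSNR = 10 * log10(MAX^2 / MSE).
    """
    mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic example: an 8-bit frame and a noisy copy of it.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, size=frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")  # higher = closer to the original
```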
Understanding QoE, QoS, and Network Throughput

In networking, several metrics determine how well a network performs and how satisfied users feel. Three key metrics are:

- Quality of Experience (QoE): Measures the user’s satisfaction with the service. Subjective and user-centric.
- Quality of Service (QoS): Measures network performance parameters like latency, jitter, and packet loss. Network-centric and technical.
- Network Throughput: Measures the amount of data transmitted per unit time. Network capacity-focused.

Figure: Visualization of QoE (user satisfaction), QoS (network performance), and Throughput (data rate capacity).

Quick Examples:
- QoE: How smooth a video call feels to the user.
- QoS: Ensuring low latency and minimal packet loss during a VoIP call.
- Throughput: Measuring 50 Mbps download speed on Wi-Fi.

Analogy for Easy Learning: Think of a water system:
- Throughput → size of the pipe (how much water can flow)
- QoS → reg...
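QoE is subjective, but QoS and throughput can be computed directly from packet logs. A minimal Python sketch over hypothetical per-packet measurements (all values invented for illustration):

```python
# Compute throughput and basic QoS statistics from hypothetical per-packet
# logs: (send_time_s, recv_time_s, size_bytes); None = packet lost.
from statistics import mean

packets = [
    (0.00, 0.040, 1200), (0.02, 0.065, 1200), (0.04, 0.095, 1200),
    (0.06, None,  1200),
    (0.08, 0.130, 1200),
]

delivered = [p for p in packets if p[1] is not None]
latencies = [recv - sent for sent, recv, _ in delivered]
jitter = mean(abs(a - b) for a, b in zip(latencies, latencies[1:]))  # delay variation
loss = 1 - len(delivered) / len(packets)
duration = max(recv for _, recv, _ in delivered) - min(sent for sent, _, _ in packets)
throughput_mbps = sum(size for _, _, size in delivered) * 8 / duration / 1e6

print(f"QoS: latency {mean(latencies)*1e3:.1f} ms, jitter {jitter*1e3:.1f} ms, loss {loss:.0%}")
print(f"Throughput: {throughput_mbps:.2f} Mbps")
```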
Understanding Coalition vs Non-Cooperative Games in Networks

In multi-agent systems like vehicular networks, agents (vehicles, nodes, etc.) make strategic decisions. Game theory helps us model these interactions. Two key types are Coalition (Cooperative) Games and Non-Cooperative Games.

Coalition (Cooperative) Games
Players cooperate and form coalitions to maximize joint benefits.
- Goal: Maximize total payoff together.
- Binding: Agreements are enforceable among coalition members.
- Example: Vehicles share spectrum to reduce interference.
- Benefit: Better overall network efficiency and fairness.

Non-Cooperative Games
Players act independently, each trying to maximize their own utility.
- Goal: Each player maximizes individual payoff.
- Binding: No enforceable agreements.
- Example: Vehicles choose channels individually; this may cause congestion.
- Analysis: Look for a Nash Equilibrium, where no player can improve its payoff by unilaterally changing its strategy.
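A toy Python sketch of the non-cooperative case: a two-vehicle channel-selection game in which picking the same channel causes congestion. The payoff numbers are invented for illustration; the best-response check finds the pure-strategy Nash equilibria:

```python
import itertools

# Two vehicles each pick a channel. Same channel -> congestion (payoff 1 each);
# different channels -> no interference (payoff 3 each). Values are made up.
ACTIONS = ["ch0", "ch1"]

def payoff(a1: str, a2: str) -> tuple[int, int]:
    return (1, 1) if a1 == a2 else (3, 3)

def is_nash(a1: str, a2: str) -> bool:
    """No player can gain by unilaterally switching channels."""
    u1, u2 = payoff(a1, a2)
    best1 = all(payoff(alt, a2)[0] <= u1 for alt in ACTIONS)
    best2 = all(payoff(a1, alt)[1] <= u2 for alt in ACTIONS)
    return best1 and best2

for a1, a2 in itertools.product(ACTIONS, repeat=2):
    if is_nash(a1, a2):
        print(f"Nash equilibrium: vehicle1={a1}, vehicle2={a2}")
# -> the two profiles where the vehicles pick different channels
```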
Federated Learning (FL) for DQN: Learning Together Without Sharing Data

Imagine hundreds of smart vehicles on the road, each trying to decide the best action: which channel to use, how much power to transmit, or when to hand over to a new base station. Each vehicle runs its own Deep Q-Network (DQN). Here, think of each vehicle as a "cell" deploying its own DQN agent that continuously learns optimal communication and resource-allocation policies over time.

But here's the challenge: collecting all raw experience data from every vehicle centrally is impractical. It is too much data, and privacy matters.

Why Federated Learning?
Federated Learning trains the model locally on each vehicle and periodically shares only the model updates to form a global model, without exchanging raw user data. This ensures privacy, reduces bandwidth usage, and still allows learning from everyone's experience.

How It Works (Step by Step)
1. Local Learning: Each vehicle (cell) trains its DQN on its own locally collected experience ...
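A minimal sketch of the federation loop under simple assumptions: FedAvg-style plain weight averaging, with local DQN training replaced by a random-update stand-in. Only weights, never raw experience, are exchanged:

```python
import numpy as np

def local_update(weights: list[np.ndarray], lr: float = 0.01) -> list[np.ndarray]:
    """Stand-in for one round of local DQN training (random gradient here)."""
    return [w - lr * np.random.randn(*w.shape) for w in weights]

def fed_avg(all_weights: list[list[np.ndarray]]) -> list[np.ndarray]:
    """Average corresponding layers across all vehicles' models."""
    return [np.mean(layers, axis=0) for layers in zip(*all_weights)]

# Hypothetical global model: one weight matrix and one bias vector.
global_model = [np.zeros((4, 2)), np.zeros(2)]

for round_idx in range(3):                       # a few federation rounds
    local_models = [local_update([w.copy() for w in global_model])
                    for _ in range(5)]           # 5 vehicles train locally
    global_model = fed_avg(local_models)         # server averages the updates
    print(f"round {round_idx}: mean |w| = {np.mean(np.abs(global_model[0])):.4f}")
```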
Q-Learning, Deep Q-Networks (DQN), and Their Role in NR-V2X

1. What Is Q-Learning? (Simple)
Imagine a robot navigating a maze. At each position (state), it can take an action (move up/down/left/right). Some actions give rewards (like +10 for reaching the goal), others give penalties. The robot doesn't know the best path at first; it must learn by trying actions and observing rewards.

Q-Learning helps the robot learn the value of taking each action in each state, stored in a Q-table:

- Q(s,a): the robot's current estimate: "If I'm in state s and take action a, how good is it long-term?"
- Q*(s,a): the optimal total reward: "If I take action a in state s and then act optimally forever after, how much total reward could I get?"

Over many trials, Q(s,a) is updated to approach Q*(s,a) using the standard update rule Q(s,a) ← Q(s,a) + α[r + γ·max Q(s′,a′) − Q(s,a)], helping the robot learn the best action in each situation.

Figure 1: Robot navigating a maze, showing states, actions, and rewards, illustrating the intuition behind Q(s,a) and Q*(s,a).
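A minimal tabular Q-learning sketch on a toy one-dimensional maze, with the goal at the rightmost cell; all hyperparameters and rewards are example values:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 10.0 if s_next == GOAL else -1.0   # step cost rewards short paths
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print("Learned policy:", ["right" if Q[(s, 1)] >= Q[(s, -1)] else "left"
                          for s in range(N_STATES - 1)])
```

A DQN replaces this explicit Q-table with a neural network that approximates Q(s,a), which is what makes the approach scale to the large state spaces found in NR-V2X.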